3DOR 19
Browsing 3DOR 19 by Subject "Information Systems"
Item: Extended 2D Scene Sketch-Based 3D Scene Retrieval (The Eurographics Association, 2019)
Authors: Yuan, Juefei; Abdul-Rashid, Hameed; Li, Bo; Lu, Yijuan; Schreck, Tobias; Bui, Ngoc-Minh; Do, Trong-Le; Nguyen, Khac-Tuan; Nguyen, Thanh-An; Nguyen, Vinh-Tiep; Tran, Minh-Triet; Wang, Tianyang
Editors: Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco
Abstract: Sketch-based 3D scene retrieval is the task of retrieving 3D scene models given a user's hand-drawn 2D scene sketch. It is a brand-new but very challenging research topic in the field of 3D object retrieval because of the semantic gap between the two representations: 3D scene models or views differ substantially from non-realistic 2D scene sketches. To boost this interesting research, we organized a 2D Scene Sketch-Based 3D Scene Retrieval track in SHREC'18, resulting in the SceneSBR2018 benchmark, which contains 10 scene classes. To make it more comprehensive, we have extended the number of scene categories from the initial 10 classes in the SceneSBR2018 benchmark to 30 classes, resulting in a new and more challenging benchmark, SceneSBR2019, which has 750 2D scene sketches and 3,000 3D scene models. The objective of this track is therefore to further evaluate the performance and scalability of different 2D scene sketch-based 3D scene model retrieval algorithms on this extended and more comprehensive benchmark. In this track, two groups, from the USA and Vietnam, successfully submitted 4 runs. We evaluate their retrieval performance based on 7 commonly used retrieval metrics. We have also conducted a comprehensive analysis and discussion of these methods and proposed several future research directions for this challenging topic. Deep learning techniques have again proved their great potential for this challenging retrieval task, in terms of both retrieval accuracy and scalability to a larger dataset. We hope this publicly available benchmark, together with its evaluation results and source code, will further enrich and promote the research area of 2D scene sketch-based 3D scene retrieval and its corresponding applications.

Item: Monocular Image Based 3D Model Retrieval (The Eurographics Association, 2019)
Authors: Li, Wenhui; Liu, Anan; Nie, Weizhi; Song, Dan; Li, Yuqian; Wang, Weijie; Xiang, Shu; Zhou, Heyu; Bui, Ngoc-Minh; Cen, Yunchi; Chen, Zenian; Chung-Nguyen, Huy-Hoang; Diep, Gia-Han; Do, Trong-Le; Doubrovski, Eugeni L.; Duong, Anh-Duc; Geraedts, Jo M. P.; Guo, Haobin; Hoang, Trung-Hieu; Li, Yichen; Liu, Xing; Liu, Zishun; Luu, Duc-Tuan; Ma, Yunsheng; Nguyen, Vinh-Tiep; Nie, Jie; Ren, Tongwei; Tran, Mai-Khiem; Tran-Nguyen, Son-Thanh; Tran, Minh-Triet; Vu-Le, The-Anh; Wang, Charlie C. L.; Wang, Shijie; Wu, Gangshan; Yang, Caifei; Yuan, Meng; Zhai, Hao; Zhang, Ao; Zhang, Fan; Zhao, Sicheng
Editors: Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco
Abstract: Monocular image based 3D object retrieval is a novel and challenging research topic in the field of 3D object retrieval. Given an RGB image captured in the real world, it aims to search for relevant 3D objects in a dataset. To advance this promising research, we organized this SHREC track and built the first monocular image based 3D object retrieval benchmark by collecting 2D images from ImageNet and 3D objects from popular 3D datasets such as NTU, PSB, ModelNet40 and ShapeNet. The benchmark contains 21,000 classified 2D images and 7,690 3D objects in 21 categories. The track attracted 9 groups from 4 countries and the submission of 20 runs.
For a comprehensive comparison, 7 commonly used retrieval performance metrics were employed to evaluate the submissions. The evaluation results show that supervised cross-domain learning achieves superior retrieval performance (best NN is 97.4%) by bridging the domain gap with label information. However, unsupervised cross-domain learning (best NN is 61.2%), which is more practical for real applications, remains a major challenge. Although we provided both view images and an OBJ file for each 3D model, all participants used the view images to represent the 3D models. An interesting direction for future work is to use the 3D information directly, together with the 2D RGB information, to solve the task of monocular image based 3D model retrieval.
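Neither abstract includes its evaluation code, but as a rough illustration of what the nearest-neighbor (NN) scores reported above measure in these cross-domain retrieval tracks, the following is a minimal Python sketch. It assumes a precomputed query-to-model distance matrix and class labels for both domains; the function name nn_accuracy and the toy data are hypothetical and not taken from the tracks' actual evaluation toolkits.

import numpy as np

def nn_accuracy(dist, query_labels, target_labels):
    """Fraction of queries whose single nearest 3D model shares the query's class.

    dist:          (num_queries, num_targets) distance matrix, smaller = closer
    query_labels:  (num_queries,) class label per query (2D sketch or monocular image)
    target_labels: (num_targets,) class label per 3D model
    """
    nearest = np.argmin(dist, axis=1)                       # index of the closest 3D model per query
    return float(np.mean(target_labels[nearest] == query_labels))

# Toy usage: 3 queries against 4 candidate 3D models
dist = np.array([[0.1, 0.9, 0.8, 0.7],
                 [0.6, 0.2, 0.9, 0.8],
                 [0.5, 0.4, 0.3, 0.9]])
query_labels = np.array([0, 1, 2])
target_labels = np.array([0, 1, 2, 2])
print(nn_accuracy(dist, query_labels, target_labels))       # 1.0 for this toy example

In both tracks the full evaluation also reports first tier, second tier, E-measure, DCG and precision-recall style metrics, which are computed from the same ranked lists rather than only the top-1 neighbor.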