RGB-D to CAD Retrieval with ObjectNN Dataset

Abstract
The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. It is inspired by the practical need to match an object acquired with a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN 16] and CAD models from ShapeNet [CFG 15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view images and 3D geometry. The best method in this track achieves 82% retrieval accuracy.
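As an illustration of the multi-view approach mentioned above, the sketch below (not the track's official code) shows one common way to pose RGB-D to CAD retrieval: render each reconstructed RGB-D object and each CAD model from several viewpoints, pool per-view CNN features into a single descriptor, and retrieve the CAD models whose descriptors are closest to the query. All class and function names here are assumptions for illustration, and a pretrained backbone would be used in practice.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MultiViewEncoder(nn.Module):
    """Pool per-view CNN features into one object descriptor (max pooling)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18()   # assumption: in practice, load pretrained weights
        backbone.fc = nn.Identity()    # keep the 512-d globally pooled feature
        self.backbone = backbone

    def forward(self, views):          # views: (B, V, 3, H, W) rendered images
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w))   # (B*V, 512)
        feats = feats.reshape(b, v, -1)
        # Max-pool over views, then L2-normalise so cosine similarity is a dot product.
        return F.normalize(feats.max(dim=1).values, dim=1)     # (B, 512)

def retrieve(query_desc, cad_descs, k=5):
    """Return indices of the k CAD models closest to each query descriptor."""
    sims = query_desc @ cad_descs.t()  # cosine similarity (descriptors are normalised)
    return sims.topk(k, dim=1).indices

if __name__ == "__main__":
    encoder = MultiViewEncoder().eval()
    with torch.no_grad():
        rgbd_views = torch.rand(2, 12, 3, 224, 224)   # 2 query objects, 12 views each
        cad_views = torch.rand(10, 12, 3, 224, 224)   # 10 candidate CAD models
        q = encoder(rgbd_views)
        c = encoder(cad_views)
        print(retrieve(q, c, k=3))    # (2, 3) indices of the nearest CAD models

The same pooled-descriptor scheme works for purely geometric inputs (e.g. rendered depth or normal maps) by changing what is rendered, which is one way multi-view and 3D geometry cues can be combined.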
@inproceedings{10.2312:3dor.20171048,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Ioannis Pratikakis and Florent Dupont and Maks Ovsjanikov},
  title     = {{RGB-D to CAD Retrieval with ObjectNN Dataset}},
  author    = {Hua, Binh-Son and Truong, Quang-Trung and Johan, Henry and Tashiro, Shoki and Aono, Masaki and Tran, Minh-Triet and Pham, Viet-Khoi and Nguyen, Hai-Dang and Nguyen, Vinh-Tiep and Tran, Quang-Thang and Phan, Thuyen V. and Truong, Bao and Tran, Minh-Khoi and Do, Minh N. and Duong, Anh-Duc and Yu, Lap-Fai and Nguyen, Duc Thanh and Yeung, Sai-Kit and Pham, Quang-Hieu and Kanezaki, Asako and Lee, Tang and Chiang, HungYueh and Hsu, Winston and Li, Bo and Lu, Yijuan},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-030-7},
  DOI       = {10.2312/3dor.20171048}
}