3D Object Retrieval with Multimodal Views

Abstract
This paper reports the results of the SHREC'16 track: 3D Object Retrieval with Multimodal Views, whose goal is to evaluate the performance of retrieval algorithms when multimodal views are employed for 3D object representation. For this task, a collection of 605 objects was generated, and both color images and depth images are provided for each object. 200 objects, including 100 3D printing models and 100 real 3D objects, are selected as queries, while the remaining 405 objects serve as the target set, and average retrieval performance is measured. The track attracted seven participants, who submitted nine runs. Compared with last year's results, the 3D printing models clearly have a negative influence: this year's performance is worse than last year's. Nevertheless, the results also indicate a promising direction for multimodal view-based 3D retrieval methods and reveal interesting insights into dealing with multimodal data.
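The section does not spell out the evaluation protocol beyond "average retrieval performance is measured" over the 200-query / 405-target split, so the sketch below is only an illustration of how such a measure could be computed. It assumes mean average precision as the metric and hypothetical inputs `dist` (a query-by-target distance matrix), `query_labels`, and `target_labels` (category ids); the track's actual metrics and data format may differ.

import numpy as np

def average_precision(ranked_relevance):
    """Average precision for one query, given a binary relevance
    vector ordered by the retrieval ranking (best match first)."""
    ranked_relevance = np.asarray(ranked_relevance, dtype=float)
    n_relevant = ranked_relevance.sum()
    if n_relevant == 0:
        return 0.0
    hits = ranked_relevance.cumsum()
    ranks = np.arange(1, len(ranked_relevance) + 1)
    precisions = hits / ranks
    return float((precisions * ranked_relevance).sum() / n_relevant)

def mean_average_precision(dist, query_labels, target_labels):
    """dist: (n_queries, n_targets) distances between query objects
    and target objects; labels are per-object category ids."""
    aps = []
    for q in range(dist.shape[0]):
        order = np.argsort(dist[q])                      # rank targets by distance
        relevance = target_labels[order] == query_labels[q]
        aps.append(average_precision(relevance))
    return float(np.mean(aps))

# Hypothetical usage with random data shaped like the track's split
# (200 queries, 405 targets); real labels would come from the benchmark.
rng = np.random.default_rng(0)
dist = rng.random((200, 405))
query_labels = rng.integers(0, 40, size=200)
target_labels = rng.integers(0, 40, size=405)
print(mean_average_precision(dist, query_labels, target_labels))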
Citation
@inproceedings{10.2312:3dor.20161093,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {A. Ferreira and A. Giachetti and D. Giorgi},
  title     = {{3D Object Retrieval with Multimodal Views}},
  author    = {Gao, Yue and Nie, Weizhi and Gao, Zan and Hao, Jiayun and Ji, Rongrong and Li, Haisheng and Liu, Mingxia and Pan, Lili and Qiu, Yu and Wei, Liwei and Wang, Zhao and Wei, Hongjiang and Liu, Anan and Zhang, Yuyao and Zhang, Jun and Zhang, Yang and Zheng, Yali and Su, Yuting and Dai, Qionghai and An, Le and Chen, Fuhai and Cao, Liujuan and Dong, Shuilong and De, Yu},
  year      = {2016},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-03868-004-8},
  DOI       = {10.2312/3dor.20161093}
}