Browsing by Author "Li, Yichen"
Now showing 1 - 2 of 2
Item: Augmented Reality for Sculpture Stability Analysis and Conservation (The Eurographics Association, 2020)
Authors: Henneman, Dennis; Li, Yichen; Ochsendorf, John; Betke, Margrit; Whiting, Emily
Editors: Spagnuolo, Michela; Melero, Francisco Javier
Abstract: Augmented reality (AR) technology has provided museum visitors with more immersive experiences, but it has yet to reach its full potential for the conservators and historians who craft the exhibits and protect their cultural heritage. In this paper, we propose ConservatAR, an ongoing project that assists sculpture conservation in AR with physical simulation and data visualization. ConservatAR employs two techniques: a static analysis that predicts tipping vulnerabilities for homogeneous sculptures, and a dynamic analysis for tipping detection and impact visualization of cracked and non-homogeneous sculptures during a user-controlled collapse. Formative user studies with conservators from the Museum of Fine Arts, Boston evaluate the usability and efficacy of our techniques, providing valuable insight into how AR can best be applied to art conservation.

Item: Monocular Image Based 3D Model Retrieval (The Eurographics Association, 2019)
Authors: Li, Wenhui; Liu, Anan; Nie, Weizhi; Song, Dan; Li, Yuqian; Wang, Weijie; Xiang, Shu; Zhou, Heyu; Bui, Ngoc-Minh; Cen, Yunchi; Chen, Zenian; Chung-Nguyen, Huy-Hoang; Diep, Gia-Han; Do, Trong-Le; Doubrovski, Eugeni L.; Duong, Anh-Duc; Geraedts, Jo M. P.; Guo, Haobin; Hoang, Trung-Hieu; Li, Yichen; Liu, Xing; Liu, Zishun; Luu, Duc-Tuan; Ma, Yunsheng; Nguyen, Vinh-Tiep; Nie, Jie; Ren, Tongwei; Tran, Mai-Khiem; Tran-Nguyen, Son-Thanh; Tran, Minh-Triet; Vu-Le, The-Anh; Wang, Charlie C. L.; Wang, Shijie; Wu, Gangshan; Yang, Caifei; Yuan, Meng; Zhai, Hao; Zhang, Ao; Zhang, Fan; Zhao, Sicheng
Editors: Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco
Abstract: Monocular image based 3D object retrieval is a novel and challenging research topic in the field of 3D object retrieval. Given an RGB image captured in the real world, the task is to search a dataset for relevant 3D objects. To advance this promising research area, we organize this SHREC track and build the first monocular image based 3D object retrieval benchmark by collecting 2D images from ImageNet and 3D objects from popular 3D datasets such as NTU, PSB, ModelNet40, and ShapeNet. The benchmark contains 21,000 classified 2D images and 7,690 3D objects in 21 categories. The track attracted 9 groups from 4 countries and the submission of 20 runs. For a comprehensive comparison, 7 commonly used retrieval performance metrics were used to evaluate the submissions. The evaluation results show that supervised cross-domain learning achieves superior retrieval performance (best NN of 97.4%) by bridging the domain gap with label information. However, unsupervised cross-domain learning (best NN of 61.2%), which is more practical for real applications, remains a significant challenge. Although we provided both view images and an OBJ file for each 3D model, all participants used the view images to represent the 3D models. An interesting direction for future work is to directly use the 3D information together with the 2D RGB information to solve the task of monocular image based 3D model retrieval.
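The static tipping analysis described in the ConservatAR abstract above rests on a classical stability criterion: a homogeneous rigid body resting on a flat surface will not tip as long as the projection of its center of mass lies inside its support polygon. A minimal sketch of that check, assuming a convex support polygon and treating the function name and data layout as illustrative rather than the paper's actual implementation:

```python
# Hypothetical sketch of a static tipping check: a sculpture is
# stable when the projection of its center of mass (COM) falls
# inside its convex support polygon. This is the general stability
# criterion, not ConservatAR's actual code.

def com_inside_support(com_xy, support_polygon):
    """Return True if the 2D projection of the center of mass lies
    inside a convex support polygon given as counter-clockwise
    (x, y) vertices."""
    x, y = com_xy
    n = len(support_polygon)
    for i in range(n):
        x1, y1 = support_polygon[i]
        x2, y2 = support_polygon[(i + 1) % n]
        # Cross product of the edge vector with the vector to the
        # COM; a negative value means the COM is outside this CCW
        # edge, i.e. the sculpture would tip over that edge.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

# A unit-square base: a COM over the base is stable; one projected
# past an edge is a tipping vulnerability.
base = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(com_inside_support((0.5, 0.5), base))  # True
print(com_inside_support((1.2, 0.5), base))  # False
```

A real analysis would also report a safety margin, e.g. the distance from the COM projection to the nearest support-polygon edge, which indicates how large a perturbation the sculpture can tolerate.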
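The "NN" figures quoted in the SHREC track results refer to the nearest neighbor metric: the fraction of queries whose top-ranked retrieved 3D model belongs to the query's category. A minimal sketch of how that score is computed, with toy 2D embeddings standing in for real image and model features (the function name and data are illustrative assumptions):

```python
# Hypothetical sketch of the NN (nearest neighbor) retrieval metric:
# the fraction of query images whose single nearest gallery item
# shares the query's class label. Feature extraction is out of
# scope; the embeddings below are toy 2D vectors.
import math

def nn_score(query_feats, query_labels, gallery_feats, gallery_labels):
    """Fraction of queries whose nearest gallery item (by Euclidean
    distance) has the same label as the query."""
    hits = 0
    for qf, ql in zip(query_feats, query_labels):
        # Index of the gallery item closest to this query embedding.
        best = min(
            range(len(gallery_feats)),
            key=lambda i: math.dist(qf, gallery_feats[i]),
        )
        hits += gallery_labels[best] == ql
    return hits / len(query_feats)

# Toy example: two image queries against three 3D-model embeddings.
gallery = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
g_labels = ["chair", "chair", "plane"]
queries = [(0.1, 0.1), (4.8, 5.1)]
q_labels = ["chair", "plane"]
print(nn_score(queries, q_labels, gallery, g_labels))  # 1.0
```

In the cross-domain setting of the track, the query features come from 2D images and the gallery features from 3D models; the gap between the 97.4% supervised and 61.2% unsupervised results reflects how hard it is to align those two embedding spaces without label information.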