3DOR 18
Browsing 3DOR 18 by Title (16 items)
Item: 2D Image-Based 3D Scene Retrieval (The Eurographics Association, 2018)
Authors: Abdul-Rashid, Hameed; Yuan, Juefei; Li, Bo; Lu, Yijuan; Bai, Song; Bai, Xiang; Bui, Ngoc-Minh; Do, Minh N.; Do, Trong-Le; Duong, Anh-Duc; He, Xinwei; Le, Tu-Khiem; Li, Wenhui; Liu, Anan; Liu, Xiaolong; Nguyen, Khac-Tuan; Nguyen, Vinh-Tiep; Nie, Weizhi; Ninh, Van-Tu; Su, Yuting; Ton-That, Vinh; Tran, Minh-Triet; Xiang, Shu; Zhou, Heyu; Zhou, Yang; Zhou, Zhichao
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

2D scene image-based 3D scene retrieval is a new research topic in the field of 3D object retrieval: given a 2D scene image, the task is to search a dataset for relevant 3D scenes. It offers an intuitive and convenient framework that allows users to learn, search, and utilize the retrieved results for a wide range of related applications, such as automatic 3D content generation for 3D movie, game, and animation production, robotic vision, consumer electronics app development, and autonomous vehicles. To advance this promising research, we organized this SHREC track and built the first 2D scene image-based 3D scene retrieval benchmark by collecting 2D images from ImageNet and 3D scenes from Google 3D Warehouse. The benchmark contains 10,000 2D scene images and 1,000 3D scene models, uniformly classified into ten (10) categories. Seven (7) groups from five countries (China, Chile, USA, UK, and Vietnam) registered for the track, but due to the many challenges involved, only three (3) groups successfully submitted ten (10) runs of five methods. For a comprehensive comparison, seven (7) commonly used retrieval performance metrics have been used to evaluate retrieval performance. We also suggest several future research directions for this research topic.
We hope that this publicly available [ARYLL18] benchmark, together with the comparative evaluation results and the corresponding evaluation code, will further enrich and boost research on 2D scene image-based 3D scene retrieval and its applications.

Item: 2D Scene Sketch-Based 3D Scene Retrieval (The Eurographics Association, 2018)
Authors: Yuan, Juefei; Li, Bo; Lu, Yijuan; Bai, Song; Bai, Xiang; Bui, Ngoc-Minh; Do, Minh N.; Do, Trong-Le; Duong, Anh-Duc; He, Xinwei; Le, Tu-Khiem; Li, Wenhui; Liu, Anan; Liu, Xiaolong; Nguyen, Khac-Tuan; Nguyen, Vinh-Tiep; Nie, Weizhi; Ninh, Van-Tu; Su, Yuting; Ton-That, Vinh; Tran, Minh-Triet; Xiang, Shu; Zhou, Heyu; Zhou, Yang; Zhou, Zhichao
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Sketch-based 3D model retrieval has the advantage of intuitiveness over other types of retrieval schemes. Current research in sketch-based 3D model retrieval usually targets the problem of retrieving a list of candidate 3D models using a single sketch as input. 2D scene sketch-based 3D scene retrieval is a brand-new research topic in the field of 3D object retrieval. Unlike traditional sketch-based 3D model retrieval, which ideally assumes that a query sketch contains only a single object, this new topic considers a 2D scene sketch containing several objects that may overlap with each other, and thus be occluded, and that have relative location configurations. It is challenging due to the semantic gap between the iconic 2D representation of sketches and the more accurate 3D representation of 3D models, but it also has vast applications such as 3D scene reconstruction, autonomous driving, 3D geometry video retrieval, and 3D AR/VR entertainment. Therefore, this research topic deserves further exploration.
To promote this interesting research, we organized this SHREC track and built the first 2D scene sketch-based 3D scene retrieval benchmark by collecting 3D scenes from Google 3D Warehouse and utilizing our previously proposed 2D scene sketch dataset Scene250. The objective of this track is to evaluate the performance of different 2D scene sketch-based 3D scene retrieval algorithms using a 2D sketch query dataset and a 3D Warehouse model dataset. The benchmark contains 250 scene sketches and 1,000 3D scene models, both equally classified into 10 classes. Six groups from five countries (China, Chile, USA, UK, and Vietnam) registered for the track, but due to the many challenges involved, only 3 groups successfully submitted 8 runs. The retrieval performance of the submitted results has been evaluated using 7 commonly used retrieval performance metrics. We also conduct a thorough analysis and discussion of these methods, and suggest several future research directions for this problem. We hope that this publicly available [YLL18] benchmark, together with the comparative evaluation results and the corresponding evaluation code, will further enrich and advance research on 2D scene sketch-based 3D scene retrieval and its applications.

Item: 3DOR 2018: Frontmatter (Eurographics Association, 2018)
Authors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Item: Automatic Extraction of Complex 3D Structures: Application to the Inner Ear Segmentation from Cone Beam CT Digital Volumes (The Eurographics Association, 2018)
Authors: Beguet, Florian; Mari, Jean-Luc; Cresson, Thierry; Schmittbuhl, Matthieu; Guise, Jacques A. de
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

We present an automatic approach for the retrieval of a complex structure within a 3D digital volume, using a generic deformable surface model. We apply this approach to inner ear reconstruction from Cone Beam CT (CBCT) 3D data.
The proposed method is based on a single prior-shape initialization followed by two steps: a geometric rigid adjustment first allows a close fit to the inner ear boundaries, and a Laplacian mesh deformation method then iteratively refines the mesh. Preliminary results are promising in terms of several similarity metrics.

Item: Completion of Cultural Heritage Objects with Rotational Symmetry (The Eurographics Association, 2018)
Authors: Sipiran, Ivan
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Archaeological artifacts are an important part of our cultural heritage: they help us understand how our ancestors used to live. Unfortunately, many of these objects are badly damaged by the passage of time and need repair. If an object exhibits some form of symmetry, it is possible to complete the missing regions by replicating existing parts of the object. In this paper, we present a framework to complete 3D objects that exhibit rotational symmetry. Our approach combines a number of algorithms from the computer vision community that have performed well on similar problems. To complete an archaeological artifact, we begin by scanning the object to produce a 3D triangle mesh. We then preprocess the mesh to remove fissures and smooth the surface. We continue by detecting the most salient vertices of the mesh (the key-points). If the object exhibits rotational symmetry, the key-points should form a circular structure which the Random Sample Consensus (RANSAC) algorithm should be able to detect. The axis of the detected circle should correspond to the axis of symmetry of the object; thus, by rotating the mesh around this axis we can complete a large portion of the missing regions. We alleviate any misalignment caused during the rotations via a non-rigid alignment procedure. In the evaluation, we compare the performance of our approach with other state-of-the-art algorithms for 3D object completion.
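The RANSAC circle-detection step described above (finding a circle through the key-points and taking its axis as the symmetry axis) can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names, thresholds, and the pure-Python vector helpers are ours.

```python
import math
import random

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a): return math.sqrt(dot(a, a))

def circle_through_points(p1, p2, p3):
    """Circumscribed circle of three 3D points -> (center, radius, unit axis), or None."""
    u, v = sub(p2, p1), sub(p3, p1)
    w = cross(u, v)
    ww = dot(w, w)
    if ww < 1e-12:                                  # (near-)collinear sample: no circle
        return None
    # Circumcenter: c = p1 + (|v|^2 (w x u) + |u|^2 (v x w)) / (2 |w|^2)
    c = add(p1, scale(add(scale(cross(w, u), dot(v, v)),
                          scale(cross(v, w), dot(u, u))), 1.0 / (2.0 * ww)))
    return c, norm(sub(c, p1)), scale(w, 1.0 / math.sqrt(ww))

def ransac_circle(points, iters=800, eps=0.05, rng=None):
    """Fit a 3D circle to key-points; its center and axis give the symmetry axis."""
    rng = rng or random.Random(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        fit = circle_through_points(*rng.sample(points, 3))
        if fit is None:
            continue
        c, r, n = fit
        inliers = 0
        for p in points:
            d_plane = dot(sub(p, c), n)             # offset from the circle's plane
            proj = sub(p, scale(n, d_plane))        # projection onto that plane
            d_radial = norm(sub(proj, c)) - r       # offset from the circle rim
            if math.hypot(d_plane, d_radial) < eps:
                inliers += 1
        if inliers > best_inliers:
            best_inliers, best = inliers, (c, n)
    return best                                     # (point on axis, axis direction)

# Toy usage: noisy key-points on a circle around the z-axis through (1, 2, 3), plus outliers.
data_rng = random.Random(1)
pts = [(1 + 2*math.cos(t) + data_rng.gauss(0, 0.01),
        2 + 2*math.sin(t) + data_rng.gauss(0, 0.01),
        3 + data_rng.gauss(0, 0.01)) for t in (i*0.1 for i in range(60))]
pts += [(data_rng.uniform(-5, 5), data_rng.uniform(-5, 5), data_rng.uniform(-5, 5))
        for _ in range(10)]
axis_point, axis_dir = ransac_circle(pts)
```

On this synthetic input the recovered axis direction is close to (0, 0, 1) (up to sign) and the recovered axis point is close to the true circle center, despite the outliers.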
The benchmark shows that our algorithm is effective at completing damaged archaeological objects.

Item: Edge-based LBP Description of Surfaces with Colorimetric Patterns (The Eurographics Association, 2018)
Authors: Thompson, Elia Moscoso; Biasotti, Silvia
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

In this paper we target the problem of the retrieval of colour patterns over surfaces. We generalize the well-known Local Binary Pattern (LBP) descriptor for images to surface tessellations. The key concept of the LBP is to code the variability of the colour values around each pixel. In the case of a surface tessellation, we adopt rings around vertices that are obtained with a sphere-mesh intersection driven by the edges of the mesh; for this reason, we name our method edgeLBP. Experimental results show that this description performs well for pattern retrieval, even when the patterns come from degraded and corrupted archaeological fragments.

Item: Experimental Similarity Assessment for a Collection of Fragmented Artifacts (The Eurographics Association, 2018)
Authors: Biasotti, Silvia; Thompson, Elia Moscoso; Spagnuolo, Michela
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

In the Visual Heritage domain, search engines are expected to support archaeologists and curators in cross-correlating and searching across multiple collections. Archaeological excavations return artifacts that are often damaged, with parts fragmented into several pieces or missing entirely. The notion of similarity among fragments cannot be based simply on geometric shape: style, material, color, decorations, and other attributes are all important factors that contribute to this concept.
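The classic image LBP that the edgeLBP entry above generalizes can be illustrated on an ordinary pixel grid. This is a minimal sketch in our own notation (the ring-based surface version, and refinements such as rotation-invariant or uniform codes, are not shown):

```python
def lbp_code(img, x, y):
    """Classic 8-neighbour Local Binary Pattern: one bit per neighbour,
    set when the neighbour's value is >= the centre pixel's value."""
    centre = img[y][x]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbours):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Descriptor: 256-bin histogram of LBP codes over the interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, x, y)] += 1
    return hist

# Toy usage: a flat 5x5 patch yields the all-ones code 255 at each of its
# 9 interior pixels, since every neighbour equals (hence >=) the centre.
flat = [[7] * 5 for _ in range(5)]
hist = lbp_histogram(flat)
```

The edgeLBP replaces the fixed 8-pixel neighbourhood with rings of colour samples around each vertex, but the bit-coding idea is the same.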
In this work, we discuss to what extent existing techniques for 3D similarity matching can approach fragment similarity, what is missing, and what needs to be developed further.

Item: Geodesic-based 3D Shape Retrieval Using Sparse Autoencoders (The Eurographics Association, 2018)
Authors: Luciano, Lorenzo; Hamza, Abdessamad Ben
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

In light of the increased processing power of graphics cards and the availability of large-scale datasets, deep neural networks have shown remarkable performance in various visual computing applications. In this paper, we propose a geometric framework for unsupervised 3D shape retrieval using geodesic moments and stacked sparse autoencoders. The key idea is to learn deep shape representations in an unsupervised manner. Such discriminative shape descriptors can then be used to compute pairwise dissimilarities between the shapes in a dataset, and to find the set of shapes most relevant to a given query. Experimental evaluation on three standard 3D shape benchmarks demonstrates the competitive performance of our approach in comparison with state-of-the-art techniques.

Item: Microshapes: Efficient Querying of 3D Object Collections based on Local Shape (The Eurographics Association, 2018)
Authors: Blokland, Bart Iver van; Theoharis, Theoharis
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Content-based querying of 3D object collections has the intrinsic difficulty of creating the query object, and previous approaches have concentrated on producing global simplifications such as sketches. In contrast, this paper introduces the concept of querying 3D object collections based on local shape. The microshape descriptor is very promising in terms of generality and applicability, and is based on a variation of the spin image descriptor that uses intersection counts to determine the presence of boundaries in the support volume.
These boundaries can be used to recognise local shape similarity. Queries based on this descriptor are general, easy to specify, and robust to geometric clutter.

Item: Non-rigid 3D Model Classification Using 3D Hahn Moment Convolutional Neural Networks (The Eurographics Association, 2018)
Authors: Mesbah, Abderrahim; Berrahou, Aissam; Hammouchi, Hicham; Berbia, Hassan; Qjidaa, Hassan; Daoudi, Mohamed
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

In this paper, we propose a new 3D deep neural network architecture, the 3D Hahn Moments Convolutional Neural Network (3D HMCNN), to enhance the classification accuracy and reduce the computational complexity of a 3D pattern recognition system. The proposed architecture is derived by combining the concepts of image Hahn moments and convolutional neural networks (CNNs), both frequently utilized in pattern recognition applications. Indeed, the advantages of moments, with their global information-coding mechanism even at low orders, are combined with the high effectiveness of CNNs to build a robust network. The aim of this work is to investigate the classification capabilities of 3D HMCNN on small 3D datasets. Experiments with 3D HMCNN have been performed on the articulated parts of the McGill 3D Shape Benchmark and on the SHREC 2011 database. The results show the high classification rates of the proposed model and its ability to decrease the computational cost by training on the small number of features generated by the first 3D moments layer.

Item: Performing Image-like Convolution on Triangular Meshes (The Eurographics Association, 2018)
Authors: Tortorici, Claudio; Werghi, Naoufel; Berretti, Stefano
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Image convolution with a filtering mask is at the base of several image analysis operations.
This is motivated by its mathematical foundations and by the straightforward way the discrete convolution can be computed on a grid-like domain. Extending the convolution operation to the mesh manifold is a challenging task due to the irregular structure of mesh connections. In this paper, we propose a computational framework that allows convolutional operations on the mesh. It relies on the idea of ordering the facets of the mesh so that a shift-like operation can be derived. Experiments have been performed with several filter masks (Sobel, Gabor, etc.), showing state-of-the-art results in 3D relief pattern retrieval on the SHREC'17 dataset. We also provide evidence that the proposed framework can enable convolution and pooling-like operations, as needed for extending Convolutional Neural Networks to 3D meshes.

Item: Person Re-Identification from Depth Cameras using Skeleton and 3D Face Data (The Eurographics Association, 2018)
Authors: Pala, Pietro; Seidenari, Lorenzo; Berretti, Stefano; Bimbo, Alberto Del
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

In the typical approach, person re-identification is performed using appearance in 2D still images or videos, which rules out any application in which a person may change clothing between acquisitions; home patient monitoring is one such scenario. Depth cameras enable person re-identification that exploits 3D information capturing biometric cues such as the face and characteristic dimensions of the body. Unfortunately, face and skeleton quality is not always sufficient to guarantee correct recognition from depth data: both features are affected by the pose of the subject and the distance from the camera. In this paper, we propose a model that combines a robust skeleton representation with a highly discriminative face feature, weighting samples by their quality.
Our method, combining face and skeleton data, improves rank-1 accuracy compared to using either cue alone, especially on short, realistic sequences.

Item: Protein Shape Retrieval (The Eurographics Association, 2018)
Authors: Langenfeld, Florent; Axenopoulos, Apostolos; Chatzitofis, Anargyros; Craciun, Daniela; Daras, Petros; Du, Bowen; Giachetti, Andrea; Lai, Yu-kun; Li, Haisheng; Li, Yingbin; Masoumi, Majid; Peng, Yuxu; Rosin, Paul L.; Sirugue, Jeremy; Sun, Li; Thermos, Spyridon; Toews, Matthew; Wei, Yang; Wu, Yujuan; Zhai, Yujia; Zhao, Tianyu; Zheng, Yanping; Montes, Matthieu
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Proteins are macromolecules central to biological processes that display a dynamic and complex surface. They exhibit multiple conformations, differing by local (residue side-chain) or global (loop or domain) structural changes, which can drastically impact their global and local shape. Since the structure of proteins is linked to their function, and the disruption of their interactions can lead to a disease state, characterizing their shape is of major importance. In the present work, we report the enrichment performance of six shape-retrieval methods (3D-FusionNet, GSGW, HAPT, DEM, SIWKS, and WKS) on a dataset of 2,267 protein structures generated for this protein shape retrieval track of SHREC'18.

Item: Recognition of Geometric Patterns Over 3D Models (The Eurographics Association, 2018)
Authors: Biasotti, S.; Moscoso Thompson, E.; Barthe, L.; Berretti, S.; Giachetti, A.; Lejemble, T.; Mellado, N.; Moustakas, K.; Manolas, Iason; Dimou, Dimitrios; Tortorici, C.; Velasco-Forero, S.; Werghi, N.; Polig, M.; Sorrentino, G.; Hermon, S.
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

This track of SHREC 2018 originally aimed at recognizing relief patterns over a set of triangle meshes from laser-scan acquisitions of archaeological fragments. The track approaches a lively and very challenging problem that remains open after the end of the track.
In this report we discuss the challenges that must be faced to successfully address geometric pattern recognition over surfaces, how existing techniques can go further in this direction, what is currently missing, and what needs to be developed further.

Item: Retrieval of Gray Patterns Depicted on 3D Models (The Eurographics Association, 2018)
Authors: Moscoso Thompson, E.; Tortorici, C.; Werghi, N.; Berretti, S.; Velasco-Forero, S.; Biasotti, S.
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

This paper presents the results of the SHREC'18 track: Retrieval of gray patterns depicted on 3D models. The task proposed in the contest challenges the possibility of retrieving surfaces with the same texture pattern as a given query model. This task, which can be seen as a simplified version of many real-world applications, requires a characterization of the surfaces based on local features, rather than on surface size and/or bending. All runs submitted to this track are based on feature vectors. The retrieval performance of the submitted runs reveals that texture pattern retrieval is a challenging issue. Indeed, a good balance between the size of the pattern and the dimension of the region around a vertex, used to locally analyze the color evolution, is crucial for pattern description.

Item: RGB-D Object-to-CAD Retrieval (The Eurographics Association, 2018)
Authors: Pham, Quang-Hieu; Tran, Minh-Khoi; Li, Wenhui; Xiang, Shu; Zhou, Heyu; Nie, Weizhi; Liu, Anan; Su, Yuting; Tran, Minh-Triet; Bui, Ngoc-Minh; Do, Trong-Le; Ninh, Tu V.; Le, Tu-Khiem; Dao, Anh-Vu; Nguyen, Vinh-Tiep; Do, Minh N.; Duong, Anh-Duc; Hua, Binh-Son; Yu, Lap-Fai; Nguyen, Duc Thanh; Yeung, Sai-Kit
Editors: Telea, Alex; Theoharis, Theoharis; Veltkamp, Remco

Recent advances in consumer-grade depth sensors have enabled the collection of massive numbers of real-world 3D objects. Together with the rise of deep learning, this brings great potential for large-scale 3D object retrieval.
In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms on RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT 17] to include RGB-D objects from both SceneNN [HPN 16] and ScanNet [DCS 17], with CAD models from ShapeNetSem [CFG 15]. Evaluation results show that while the RGB-D to CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track achieves an 82% retrieval accuracy.
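Several of the tracks above evaluate ranked retrieval lists with "commonly used retrieval performance metrics". As a minimal sketch, three standard ones (Nearest Neighbor, First Tier, Second Tier) can be computed per query as follows. The function name and conventions are ours (these follow the usual definitions for queries that are not themselves in the target dataset, not any track's released evaluation code):

```python
def nn_ft_st(ranked_labels, query_label, class_size):
    """Nearest Neighbor, First Tier, and Second Tier for one query.
    ranked_labels: class labels of the retrieved models, best match first.
    class_size:    number of relevant models (C) in the target dataset."""
    c = class_size
    # NN: is the top-ranked result relevant?
    nn = 1.0 if ranked_labels[0] == query_label else 0.0
    # FT: fraction of the C relevant models found in the top C results.
    ft = sum(1 for lab in ranked_labels[:c] if lab == query_label) / c
    # ST: fraction of the C relevant models found in the top 2C results.
    st = sum(1 for lab in ranked_labels[:2 * c] if lab == query_label) / c
    return nn, ft, st

# Toy usage: 4 relevant models ('a'), 3 of which appear in the first tier.
ranks = ['a', 'a', 'b', 'a', 'b', 'a', 'b', 'b']
nn, ft, st = nn_ft_st(ranks, 'a', 4)  # -> (1.0, 0.75, 1.0)
```

Track results typically report these values averaged over all queries, alongside precision-recall-based measures such as E-measure and DCG.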