36-Issue 7
Browsing 36-Issue 7 by Subject "Computing methodologies"
Now showing 1 - 5 of 5
Item: A Data-Driven Approach for Sketch-Based 3D Shape Retrieval via Similar Drawing-Style Recommendation (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Wang, Fei; Lin, Shujin; Luo, Xiaonan; Wu, Hefeng; Wang, Ruomei; Zhou, Fan. Editors: Jernej Barbic, Wen-Chieh Lin and Olga Sorkine-Hornung.
Sketching is a simple and natural way for humans to express themselves and communicate. For this reason, it has gained increasing popularity in human-computer interaction with the emergence of multi-touch tablets and styluses. In recent years, sketch-based interactive methods have been widely used in many retrieval systems; in particular, a variety of sketch-based 3D model retrieval works have been presented. However, almost all of these works focus on directly matching sketches against the projection views of 3D models, and they suffer from the large differences between sketch drawings and model views, leading to unsatisfying retrieval results. Therefore, in this paper, during the matching procedure of the retrieval, we propose to match the query sketch against sketches of each 3D model drawn by historical users instead of against projection views. Yet since the sketches of the current user and of historical users can differ substantially, we also aim to handle users' personalized deviations and differences. To this end, we leverage recommendation algorithms to estimate the drawing-style similarity between the current user and historical users. Experimental results on the Large Scale Sketch Track Benchmark (SHREC14LSSTB) demonstrate that our method outperforms several state-of-the-art methods.
Item: Printable 3D Trees (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Bo, Zhitao; Lu, Lin; Sharf, Andrei; Xia, Yang; Deussen, Oliver; Chen, Baoquan. Editors: Jernej Barbic, Wen-Chieh Lin and Olga Sorkine-Hornung.
With the growing popularity of 3D printing, different shape classes such as fibers and hair have been explored, driving research toward class-specific solutions.
Among them, 3D trees are an important class with unique structures, characteristics and botanical features. Nevertheless, trees are an especially challenging case for 3D manufacturing: they typically consist of non-volumetric patch leaves and an extreme amount of small detail, often below printable resolution, and they are often too physically weak to be self-supporting. We introduce a novel 3D tree printability method which optimizes trees through a set of geometry modifications for manufacturing purposes. Our key idea is to formulate tree modifications as a minimal constrained set which accounts for the visual appearance of the model and its structural soundness. To handle non-printable fine details, our method modifies the tree shape by gradually abstracting details of visible parts while reducing details of non-visible parts. To guarantee structural soundness and to increase strength and stability, our algorithm incorporates a physical analysis and adjusts the tree topology and geometry accordingly while adhering to allometric rules. Our results show a variety of tree species with different complexity that are physically sound and printed correctly within reasonable time. The printed trees are correct in terms of their allometry and of high visual quality, which makes them suitable for various applications in the realm of outdoor design, modeling and manufacturing.
Item: Saliency-aware Real-time Volumetric Fusion for Object Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Yang, Sheng; Chen, Kang; Liu, Minghua; Fu, Hongbo; Hu, Shi-Min. Editors: Jernej Barbic, Wen-Chieh Lin and Olga Sorkine-Hornung.
We present a real-time approach for acquiring 3D objects with high fidelity using hand-held consumer-level RGB-D scanning devices. Existing real-time reconstruction methods typically do not take the point of interest into account, and thus might fail to produce clean reconstructions of the desired objects because of distracting objects or backgrounds.
In addition, any change in the background during scanning, which often occurs in real scenarios, can easily break the whole reconstruction process. To address these issues, we incorporate visual saliency into a traditional real-time volumetric fusion pipeline. Salient regions detected in the RGB-D frames suggest user-intended objects, and by understanding user intentions our approach can put more emphasis on important targets while eliminating the disturbance of unimportant objects. Experimental results on real-world scans demonstrate that our system is capable of effectively acquiring the geometric information of salient objects in cluttered real-world scenes, even if the background is changing.
Item: Split-Depth Image Generation and Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Liao, Jingtang; Eisemann, Martin; Eisemann, Elmar. Editors: Jernej Barbic, Wen-Chieh Lin and Olga Sorkine-Hornung.
Split-depth images use an optical illusion which can enhance the 3D impression of a 2D animation. In split-depth images (also often called split-depth GIFs, after the commonly used file format), static virtual occluders in the form of vertical or horizontal bars are added to a video clip, leading to occlusions that the observer interprets as a depth cue. In this paper, we study different factors that contribute to the illusion and propose a solution for generating split-depth images for a given RGB + depth image sequence. The presented solution builds upon a motion summarization of the object of interest (OOI) through space and time. It allows us to formulate the bar positioning as an energy-minimization problem, which we solve efficiently. We take a variety of important features into account, such as the changes of the 3D effect due to changes in the motion topology, occlusion, the proximity of the bars to the OOI, and scene saliency. We conducted a number of psycho-visual experiments to derive an appropriate energy formulation.
Our method helps in finding optimal positions for the bars and thus improves the 3D perception of the original animation. We demonstrate the effectiveness of our approach on a variety of examples. Our study with novice users shows that our approach allows them to quickly create satisfying results even for complex animations.
Item: Video Shadow Removal Using Spatio-temporal Illumination Transfer (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Zhang, Ling; Zhu, Yao; Liao, Bin; Xiao, Chunxia. Editors: Jernej Barbic, Wen-Chieh Lin and Olga Sorkine-Hornung.
Shadow removal for videos is an important and challenging vision task. In this paper, we present a novel shadow removal approach for videos captured by freely moving cameras, based on illumination transfer optimization. We first detect the shadows of the input video using interactive fast video matting. Then, based on the shadow detection results, we decompose the input video into overlapping 2D patches and find coherent correspondences between shadow and non-shadow patches via a discrete optimization technique built on a patch similarity metric. We finally remove the shadows of the input video sequences using an optimized illumination transfer method, which reasonably recovers the illumination information of the shadow regions and produces spatio-temporally shadow-free videos. We also process the shadow boundaries to make the transition between shadow and non-shadow regions smooth. Compared with previous works, our method can handle videos captured by freely moving cameras and achieves better shadow removal results. We validate the effectiveness of the proposed algorithm via a variety of experiments.
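The core idea behind illumination transfer in the last abstract, recovering a shadow region's illumination from a matched non-shadow patch, can be illustrated with a deliberately minimal per-channel gain sketch. This is a toy stand-in under our own assumptions, not the authors' optimized spatio-temporal method, and all names here are hypothetical:

```python
import numpy as np

def transfer_illumination(shadow_patch, lit_patch):
    """Toy illumination transfer: rescale a shadow patch so its per-channel
    mean matches that of a matched non-shadow (lit) patch.

    Both patches are float arrays in [0, 1] with shape (H, W, 3)."""
    eps = 1e-6  # avoid division by zero on fully black patches
    gain = lit_patch.mean(axis=(0, 1)) / (shadow_patch.mean(axis=(0, 1)) + eps)
    return np.clip(shadow_patch * gain, 0.0, 1.0)

# Toy example: a dim (shadowed) gray patch and a brighter matched patch.
shadow = np.full((8, 8, 3), 0.2)
lit = np.full((8, 8, 3), 0.6)
recovered = transfer_illumination(shadow, lit)
print(recovered.mean())  # close to 0.6, the lit patch's brightness
```

A real system would, as the abstract describes, choose the lit patch by a patch-similarity search with discrete optimization and enforce temporal coherence across frames; the per-channel gain above only conveys the basic transfer step.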