36-Issue 8
Browsing 36-Issue 8 by Title
Now showing 1 - 20 of 48
Item Approximating Planar Conformal Maps Using Regular Polygonal Meshes(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Renjie; Gotsman, Craig; Chen, Min and Zhang, Hao (Richard)Continuous conformal maps are typically approximated numerically using a triangle mesh which discretizes the plane. Computing a conformal map subject to user‐provided constraints then reduces to a sparse linear system, minimizing a quadratic ‘conformal energy’. We address the more general case of non‐triangular elements, and provide a complete analysis of the case where the plane is discretized using a mesh of regular polygons, e.g. equilateral triangles, squares and hexagons, whose interiors are mapped using barycentric coordinate functions. We demonstrate experimentally that faster convergence to continuous conformal maps may be obtained this way. We provide a formulation of the problem and its solution using complex number algebra, significantly simplifying the notation. We examine a number of common barycentric coordinate functions and demonstrate that superior approximation to harmonic coordinates of a polygon are achieved by the Moving Least Squares coordinates. We also provide a simple iterative algorithm to invert barycentric maps of regular polygon meshes, allowing to apply them in practical applications, e.g. for texture mapping.Continuous conformal maps are typically approximated numerically using a triangle mesh which discretizes the plane. Computing a conformal map subject to user‐provided constraints then reduces to a sparse linear system, minimizing a quadratic ‘conformal energy’. We address the more general case of non‐triangular elements, and provide a complete analysis of the case where the plane is discretized using a mesh of regular polygons, e.g. equilateral triangles, squares and hexagons, whose interiors are mapped using barycentric coordinate functions. We demonstrate experimentally that faster convergence to continuous conformal maps may be obtained this way. We examine a number of common barycentric coordinate functions and demonstrate that superior approximation to harmonic coordinates of a polygon are achieved by the Moving Least Squares coordinates. We also provide a simple iterative algorithm to invert barycentric maps of regular polygon meshes, allowing to apply them in practical applications, e.g. for texture mapping.Item Articulated‐Motion‐Aware Sparse Localized Decomposition(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Yupan; Li, Guiqing; Zeng, Zhichao; He, Huayun; Chen, Min and Zhang, Hao (Richard)Compactly representing time‐varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for given animated meshes by analyzing the variation of edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for poses and then evaluates the difference (residuals) between the LAs of an arbitrary pose and their counterparts in a reference one. Performing sparse localized decomposition on the residuals yields a set of components which can perfectly capture local motion of articulations. It supports intuitive articulation motion editing through manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection‐map‐based algorithm which consists of two steps of linear optimization. 
Item Articulated-Motion-Aware Sparse Localized Decomposition (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Wang, Yupan; Li, Guiqing; Zeng, Zhichao; He, Huayun; Chen, Min and Zhang, Hao (Richard)
Compactly representing time-varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for given animated meshes by analysing the variation of the edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for every pose and then evaluates the differences (residuals) between the LAs of an arbitrary pose and their counterparts in a reference one. Performing sparse localized decomposition on the residuals yields a set of components which can perfectly capture the local motion of articulations. It supports intuitive articulation motion editing through manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection-map-based algorithm which consists of two steps of linear optimization. A variety of experiments show that our decomposition is truly localized with respect to rotational motions and outperforms state-of-the-art approaches in precisely capturing local articulated motion.

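As a rough illustration of the residuals-then-decomposition pipeline described above, the sketch below applies a generic sparse decomposition to a hypothetical precomputed LA feature matrix. Scikit-learn's SparsePCA only yields sparse (not explicitly localized) components and the feature layout is an assumption, so this is a stand-in for the paper's method, not a reimplementation.

```python
# Rough sketch: sparse decomposition of edge-length/dihedral-angle (LA) residuals.
# `la_features` is an assumed (n_poses, 2*n_edges) matrix of per-pose edge lengths
# and dihedral angles; SparsePCA stands in for the paper's sparse *localized*
# decomposition, which additionally encourages spatially compact components.
import numpy as np
from sklearn.decomposition import SparsePCA

def decompose_la_residuals(la_features, reference_idx=0, n_components=10, alpha=1.0):
    residuals = la_features - la_features[reference_idx]       # difference to the reference pose
    model = SparsePCA(n_components=n_components, alpha=alpha)  # sparsity on the components
    weights = model.fit_transform(residuals)                   # per-pose blending coefficients
    return model.components_, weights                          # components: (n_components, 2*n_edges)

def edit_pose(la_features, components, weights, pose, component, gain, reference_idx=0):
    """Scale one component's blending coefficient and return the edited LA vector."""
    w = weights[pose].copy()
    w[component] *= gain
    return la_features[reference_idx] + w @ components
```
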
Item A Bi-Directional Procedural Model for Architectural Design (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Hua, H.; Chen, Min and Zhang, Hao (Richard)
It is a challenge for shape grammars to incorporate the spatial hierarchy and interior connectivity of buildings in early design stages. To resolve this difficulty, we developed a bi-directional procedural model: the forward process constructs the derivation tree with production rules, while the backward process realizes the tree with shapes in a stepwise manner (from the leaves to the root). Each inverse-derivation step involves essential geometric-topological reasoning. With this bi-directional framework, design constraints and objectives are encoded in the grammar-shape translation. We conducted two applications: the first employs geometric primitives as terminals, and the other uses previous designs as terminals. Both approaches lead to consistent interior connectivity and a rich spatial hierarchy. The results imply that bespoke geometric-topological processing helps shape grammars create plausible, novel compositions. Our model is more productive than hand-coded shape grammars, while being less computation-intensive than evolutionary treatments of shape grammars.

Item Building a Large Database of Facial Movements for Deformation Model-Based 3D Face Tracking (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Sibbing, Dominik; Kobbelt, Leif; Chen, Min and Zhang, Hao (Richard)
We introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer-grade camera. Our approach takes detected 2D facial features as input and matches them with projections of 3D features of a deformable model to determine its pose and shape. To make the tracking and reconstruction more robust, we add a smoothness prior for pose and deformation changes of the faces. Our major contribution lies in the formulation of the deformation prior, which we derive from a large database of facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length, which we use to predict the facial motion based on previous frames. In order to keep the deformation model compact and independent of the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply it to other physiognomies and thereby re-target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples.

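The deformation-gradient PCA step described above can be pictured with a short NumPy sketch; the flattened per-frame deformation-gradient matrix and its layout are assumptions for illustration, not the paper's data format.

```python
# Minimal sketch: PCA in deformation-gradient space. `defgrads` is an assumed
# (n_frames, 9*n_triangles) array holding each frame's per-triangle 3x3
# deformation gradients, flattened; the paper builds these from a large
# database of facial animations split into fixed-length snippets.
import numpy as np

def deformation_modes(defgrads, n_modes=20):
    mean = defgrads.mean(axis=0)
    centered = defgrads - mean
    # Rows of Vt are the major modes of facial deformation.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = Vt[:n_modes]                 # (n_modes, 9*n_triangles)
    coeffs = centered @ modes.T          # per-frame coordinates in mode space
    return mean, modes, coeffs

def reconstruct(mean, modes, coeffs):
    """Map mode coefficients back to flattened deformation gradients."""
    return mean + coeffs @ modes
```
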
Item Category-Specific Salient View Selection via Deep Convolutional Neural Networks (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Kim, Seong-heum; Tai, Yu-Wing; Lee, Joon-Young; Park, Jaesik; Kweon, In So; Chen, Min and Zhang, Hao (Richard)
In this paper, we present a new framework to determine the up-front orientations of 3D models and to detect their salient views. The salient viewpoint, in terms of human preferences, is the most informative projection with a correct upright orientation. Our method utilizes two Convolutional Neural Network (CNN) architectures to encode category-specific information learnt from a large number of 3D shapes and 2D images on the web. Using the first CNN model with 3D voxel data, we generate a CNN shape feature to decide the natural upright orientation of 3D objects. Once a 3D model is upright-aligned, the front projection and salient views are scored by category recognition using the second CNN model. The second CNN is trained over popular photo collections from internet users. In order to model comfortable viewing angles of 3D models, a category-dependent prior is also learnt from the users. Our approach effectively combines category-specific scores and classical evaluations to produce a data-driven viewpoint saliency map. The best viewpoints from the method are quantitatively and qualitatively validated with more than 100 objects from 20 categories. Our thumbnail images of 3D models are the most favoured among those from different approaches.

Item A Comprehensive Survey on Sampling-Based Image Matting (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Yao, Guilin; Zhao, Zhijie; Liu, Shaohui; Chen, Min and Zhang, Hao (Richard)
Sampling-based image matting currently plays a significant role in image matting and shows great potential for further development. However, survey articles and detailed classifications remain rare in this area of research. Furthermore, besides sampling strategies, most sampling-based matting algorithms apply additional operations which conceal their real sampling performance. To inspire further improvements and new work, this paper makes a comprehensive survey of sampling-based matting in the following five aspects: (i) Only the sampling step is initially preserved in the matting process to generate the final alpha results and make comparisons. (ii) Four basic categories, comprising eight detailed classes of sampling-based matting, are presented, which are combined to generate the common sampling-based matting algorithms. (iii) Each category, comprising two classes, is analysed and evaluated independently with respect to its advantages and disadvantages. (iv) Additional operations, including sampling weight, settling manner, complement and pre- and post-processing, are sequentially analysed and added to the sampling; the result and effect of each operation are also presented. (v) A pure sampling comparison framework is strongly recommended for future work.

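For context on what the isolated sampling step produces: once a foreground/background sample pair has been chosen for an unknown pixel, sampling-based matting methods estimate alpha by projecting the pixel colour onto the line between the two samples. A minimal sketch with illustrative names:

```python
# Minimal sketch of the per-pixel alpha estimate shared by sampling-based matting
# methods once a foreground/background sample pair (F, B) has been chosen for a
# pixel colour I. Names are illustrative; real methods add sampling strategies,
# weights and pre-/post-processing around this single step.
import numpy as np

def alpha_from_pair(I, F, B, eps=1e-6):
    """Project I onto the line between F and B and clamp to [0, 1]."""
    I, F, B = (np.asarray(x, dtype=float) for x in (I, F, B))
    alpha = np.dot(I - B, F - B) / (np.dot(F - B, F - B) + eps)
    return float(np.clip(alpha, 0.0, 1.0))

def pair_distortion(I, F, B):
    """How poorly (F, B, alpha) explains I; used to rank candidate sample pairs."""
    a = alpha_from_pair(I, F, B)
    I, F, B = (np.asarray(x, dtype=float) for x in (I, F, B))
    return float(np.linalg.norm(I - (a * F + (1 - a) * B)))
```
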
Item Contracting Medial Surfaces Isotropically for Fast Extraction of Centred Curve Skeletons (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Li, Lei; Wang, Wencheng; Chen, Min and Zhang, Hao (Richard)
Curve skeletons, which are a compact representation of three-dimensional shapes, must be extracted such that they are high quality, centred and smooth. However, the centredness measurements in existing methods are expensive, lowering the extraction efficiency. Although some methods trade quality for acceleration, the low-quality skeletons they generate are not suitable for applications. In this paper, we present a method to quickly extract centred curve skeletons. It operates by contracting the medial surface isotropically to the locus of the centres of its maximal inscribed spheres, i.e. spheres that are centred on the medial surface and cannot be enlarged further while the boundary of their intersection with the medial surface remains composed only of points on the sphere surface. Thus, the centred curve skeleton can be extracted conveniently. For fast extraction, we develop novel measures to quickly generate the medial surface and contract it layer by layer, with every layer contracted isotropically using spheres of equal radii to account for every part of the medial surface boundary. The experimental results show that we can stably extract curve skeletons with higher centredness and at much higher speeds than existing methods, even for noisy shapes.

Item Convolutional Sparse Coding for Capturing High-Speed Video Content (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Serrano, Ana; Garces, Elena; Masia, Belen; Gutierrez, Diego; Chen, Min and Zhang, Hao (Richard)
Video capture is limited by the trade-off between spatial and temporal resolution: when capturing videos of high temporal resolution, the spatial resolution decreases due to bandwidth limitations in the capture system. Achieving both high spatial and temporal resolution is only possible with highly specialized and very expensive hardware, and even then the same basic trade-off remains. The recent introduction of compressive sensing and sparse reconstruction techniques allows for the capture of high-speed video by coding the temporal information in a single frame and then reconstructing the full video sequence from this single coded image and a trained dictionary of image patches. In this paper, we first analyse this approach and find insights that help improve the quality of the reconstructed videos. We then introduce a novel technique based on convolutional sparse coding (CSC), and show how it outperforms the state-of-the-art patch-based approach in terms of flexibility and efficiency, due to the convolutional nature of its filter banks. The key idea for CSC high-speed video acquisition is extending the basic formulation by imposing an additional constraint in the temporal dimension, which enforces sparsity of the first-order derivatives over time.

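Reading the key idea above in symbols (our own hedged notation, not necessarily the paper's): the reconstruction is a convolutional sparse coding problem with an extra sparsity term on the temporal first-order derivatives,

\[
\min_{\{z_k\}} \; \tfrac{1}{2}\,\Big\| y - \Phi \Big(\textstyle\sum_{k} d_k * z_k\Big) \Big\|_2^2
\;+\; \lambda \sum_{k} \big\| z_k \big\|_1
\;+\; \beta \sum_{k} \big\| \nabla_t z_k \big\|_1 ,
\]

where \(y\) is the single coded frame, \(\Phi\) the per-pixel temporal coding operator of the capture system, \(d_k\) the learned convolutional filters, \(z_k\) their coefficient maps over space and time, \(*\) spatial convolution, and \(\nabla_t\) the first-order temporal difference whose sparsity is the additional constraint mentioned in the abstract.
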
Item Data-Driven Shape Interpolation and Morphing Editing (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Gao, Lin; Chen, Shu-Yu; Lai, Yu-Kun; Xia, Shihong; Chen, Min and Zhang, Hao (Richard)
Shape interpolation has many applications in computer graphics, such as morphing for computer animation. In this paper, we propose a novel data-driven mesh interpolation method. We adapt patch-based linear rotation-invariant coordinates to effectively represent the deformations of models in a shape collection, and utilize this information to guide the synthesis of interpolated shapes. Unlike previous data-driven approaches, we use a rotation/translation-invariant representation which defines the plausible deformations in a global continuous space. By effectively exploiting the knowledge in the shape space, our method produces realistic interpolation results at interactive rates, outperforming state-of-the-art methods for challenging cases. We further propose a novel approach to interactive editing of shape morphing according to the shape distribution. The user can explore the morphing path, select example models intuitively and adjust the path with simple interactions to edit the morphing sequences. This provides a useful tool that allows users to generate the desired morphing with little effort. We demonstrate the effectiveness of our approach using various examples.

Item Deformation Grammars: Hierarchical Constraint Preservation Under Deformation (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Vimont, Ulysse; Rohmer, Damien; Begault, Antoine; Cani, Marie-Paule; Chen, Min and Zhang, Hao (Richard)
Deformation grammars are a novel procedural framework for sculpting hierarchical 3D models in an object-dependent manner. They process object deformations as symbols thanks to user-defined interpretation rules. We use them to define hierarchical deformation behaviours tailored for each model, enabling any sculpting gesture to be interpreted as an adapted constraint-preserving deformation. A variety of object-specific constraints can be enforced using this framework, such as maintaining distributions of subparts, avoiding self-penetrations or meeting semantic-based user-defined rules. The operations used to maintain constraints are kept transparent to the user, enabling them to focus on their design. We demonstrate the feasibility and the versatility of this approach on a variety of examples, implemented within an interactive sculpting system.

Item Detail-Preserving Explicit Mesh Projection and Topology Matching for Particle-Based Fluids (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Dagenais, F.; Gagnon, J.; Paquette, E.; Chen, Min and Zhang, Hao (Richard)
We propose a new explicit surface tracking approach for particle-based fluid simulations. Our goal is to advect and update a highly detailed surface while only computing a coarse simulation. Current explicit surface methods lose surface details when projecting onto the isosurface of an implicit function built from particles. Our approach uses a detail-preserving projection, based on a signed distance field, to prevent the divergence of the explicit surface without losing its initial details. Furthermore, we introduce a novel topology matching stage that corrects the topology of the explicit surface based on the topology of an implicit function. To that end, we introduce an optimization approach to update the signed distance field of our explicit mesh before remeshing. Our approach is successfully used to preserve the surface details of melting and highly viscous objects, and is shown to be stable by handling complex cases involving multiple topological changes. Compared to the computation of a high-resolution simulation, using our approach with a coarse fluid simulation significantly reduces the computation time and improves the quality of the resulting surface.

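The projection step described above can be pictured as moving each explicit-surface vertex along the gradient of a signed distance field built from the particles. The sketch below is a plain (non-detail-preserving) version of that idea; `sdf` is an assumed callable, and the paper's actual projection is more elaborate.

```python
# Minimal sketch: pull explicit-surface vertices toward the zero isosurface of a
# signed distance field built from the particles. `sdf(p)` is an assumed callable
# returning the signed distance at a 3D point; the gradient is estimated with
# central differences. The paper's projection is detail-preserving and more
# elaborate than this plain snap-to-isosurface step.
import numpy as np

def sdf_gradient(sdf, p, h=1e-4):
    g = np.array([(sdf(p + h * e) - sdf(p - h * e)) / (2.0 * h) for e in np.eye(3)])
    return g / (np.linalg.norm(g) + 1e-12)

def project_vertices(vertices, sdf, iterations=3, step=1.0):
    verts = np.array(vertices, dtype=float)
    for _ in range(iterations):
        for i, v in enumerate(verts):
            verts[i] = v - step * sdf(v) * sdf_gradient(sdf, v)  # move against the field
    return verts
```
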
Item Distributed Optimization Framework for Shadow Removal in Multi-Projection Systems (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Tsukamoto, J.; Iwai, D.; Kashima, K.; Chen, Min and Zhang, Hao (Richard)
This paper proposes a novel shadow removal technique for cooperative projection systems, based on spatiotemporal prediction. In our previous work, we proposed a distributed feedback algorithm which is implementable in cooperative projection environments subject to data transfer constraints between components. A weakness of this scheme is that the compensation is conducted for each pixel independently. As a result, spatiotemporal information about the environmental change cannot be utilized even if it is available. In view of this, we specifically investigate the situation where some of the projectors are occluded by a moving object whose one-frame-ahead behaviour is predictable. In order to remove the resulting shadow, we propose a novel error-propagating scheme that is still implementable in a distributed manner and enables us to incorporate the prediction information about the obstacle. It is demonstrated theoretically and experimentally that the proposed method significantly improves the shadow removal performance in comparison to the previous work.

Item DYVERSO: A Versatile Multi-Phase Position-Based Fluids Solution for VFX (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Alduán, Iván; Tena, Angel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Many impressive fluid simulation methods have been presented in research papers. These papers typically focus on demonstrating particular innovative features, but they do not meet in a comprehensive manner the production demands of actual VFX pipelines. VFX artists seek methods that are flexible, efficient, robust and scalable, and these goals often conflict with each other. In this paper, we present a multi-phase particle-based fluid simulation framework, based on the well-known Position-Based Fluids (PBF) method, designed to address VFX production demands. Our simulation framework handles multi-phase interactions robustly thanks to a modified constraint formulation for density-contrast PBF, and it also supports the interaction of fluids sampled at different resolutions. We put special care into data structure design and implementation details. Our framework highlights cache-efficient GPU-friendly data structures, an improved spatial voxelization technique based on Z-index sorting, tuned-up simulation algorithms and two-way-coupled collision handling based on VDB fields. Altogether, our fluid simulation framework empowers artists with the efficiency, scalability and versatility needed for simulating very diverse scenes and effects.

Item EACS: Effective Avoidance Combination Strategy (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Bruneau, J.; Pettré, J.; Chen, Min and Zhang, Hao (Richard)
When navigating in crowds, humans are able to move efficiently between people. They look ahead to know which path would reduce the complexity of their interactions with others. Current navigation systems for virtual agents consider long-term planning to find a path in the static environment and short-term reactions to avoid collisions with close obstacles. Recently, some mid-term considerations have been added to avoid high-density areas. However, there is no mid-term planning among static and dynamic obstacles that would enable the agent to look ahead and avoid difficult paths or find easy ones as humans do. In this paper, we present a system for such mid-term planning. This system is added to the navigation process between pathfinding and local avoidance to improve the navigation of virtual agents. We show the capabilities of such a system using several case studies. Finally, we use an energy criterion to compare trajectories computed with and without the mid-term planning.

Item Efficient and Reliable Self-Collision Culling Using Unprojected Normal Cones (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Wang, Tongtong; Liu, Zhihua; Tang, Min; Tong, Ruofeng; Manocha, Dinesh; Chen, Min and Zhang, Hao (Richard)
We present an efficient and accurate algorithm for self-collision detection in deformable models. Our approach can perform discrete and continuous collision queries on triangulated meshes. We present a simple, linear-time algorithm to perform the normal cone test using the unprojected 3D vertices, which reduces to a sequence of point-plane classification tests. Moreover, we present a hierarchical traversal scheme that can significantly reduce the number of normal cone tests and the memory overhead using front-based normal cone culling. The overall algorithm can reliably detect all (self) collisions in models composed of hundreds of thousands of triangles. We observe considerable performance improvement over prior continuous collision detection algorithms.

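For orientation, the sketch below shows the classical normal-cone culling test that this line of work builds on: a patch whose normals fit inside a cone with a half-angle below ninety degrees cannot fold back onto itself, provided an additional contour test passes (the part the paper reformulates as a sequence of point-plane classifications, left as a placeholder here). This is the textbook test, not the paper's unprojected variant.

```python
# Rough sketch of classical normal-cone culling for self-collision detection.
# A patch whose triangle normals all fit inside a cone of half-angle < pi/2
# around some axis cannot fold back onto itself, provided its boundary contour
# also passes a projection test; that contour test is the part the paper
# reformulates as point-plane classifications, and it is a placeholder here.
import numpy as np

def triangle_normals(verts, tris):
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    n = np.cross(b - a, c - a)
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)

def normal_cone(normals):
    """Conservative (axis, half-angle) cone containing all given unit normals."""
    axis = normals.mean(axis=0)
    axis /= np.linalg.norm(axis) + 1e-12
    half_angle = float(np.max(np.arccos(np.clip(normals @ axis, -1.0, 1.0))))
    return axis, half_angle

def can_cull_self_collision(verts, tris, contour_test=lambda: True):
    _, half_angle = normal_cone(triangle_normals(verts, tris))
    return half_angle < np.pi / 2 and contour_test()
```
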
Item Enhancing Urban Façades via LiDAR-Based Sculpting (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Peethambaran, Jiju; Wang, Ruisheng; Chen, Min and Zhang, Hao (Richard)
Buildings with symmetrical façades are ubiquitous in urban landscapes, and detailed models of these buildings enhance the visual realism of digital urban scenes. However, the vast majority of the existing urban building models in web-based 3D maps such as Google Earth are either less detailed or rely heavily on texturing to render the details. We present a new framework for enhancing the details of such coarse models, using the geometry and symmetry inferred from light detection and ranging (LiDAR) scans and 2D templates. The user-defined 2D templates, referred to as coded planar meshes (CPMs), encode the geometry of the smallest repeating 3D structures of the façades via face codes. Our encoding scheme takes into account the direction, type and offset distance of the sculpting to be applied at the respective locations on the coarse model. In our approach, the LiDAR scan is registered with the coarse models taken from Google Earth 3D or Bing Maps 3D and decomposed into dominant planar segments (each representing the frontal or lateral walls of the building). The façade segments are then split into horizontal and vertical tiles using a weighted point count function defined over the window or door boundaries. This is followed by an automatic identification of CPM locations with the help of a template fitting algorithm that respects the alignment regularity as well as the inter-element spacing of the façade layout. Finally, 3D Boolean sculpting operations are applied over the boxes induced by the CPMs and the coarse model, and a detailed 3D model is generated. The proposed framework is capable of modelling details even with occluded scans, and enhances not only the frontal façades (facing the streets) but also the lateral façades of the buildings. We demonstrate the potential of the proposed framework by providing several examples of enhanced Google Earth models, and highlight the advantages of our method when designing photo-realistic urban façades.

Item Extracting Sharp Features from RGB-D Images (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Cao, Y-P.; Ju, T.; Xu, J.; Hu, S-M.; Chen, Min and Zhang, Hao (Richard)
Sharp edges are important shape features, and their extraction has been extensively studied both on point clouds and on surfaces. We consider the problem of extracting sharp edges from a sparse set of colour-and-depth (RGB-D) images. The noise-ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB-D image. Second, we design a novel algorithm for consolidating feature points obtained from multiple RGB-D images. By utilizing normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.

Item Flow-Based Temporal Selection for Interactive Volume Visualization (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Frey, S.; Ertl, T.; Chen, Min and Zhang, Hao (Richard)
We present an approach to adaptively select time steps from time-dependent volume data sets for an integrated and comprehensive visualization. This reduced set of time steps not only saves cost, but also allows showing both the spatial structure and the temporal development in one combined rendering. Our selection optimizes the coverage of the complete data on the basis of a minimum-cost flow-based technique to determine meaningful distances between time steps. As optimal solutions of both the involved transport and selection problems are prohibitively expensive, we present new approaches that are significantly faster with only minor deviations. We further propose an adaptive scheme for the progressive incorporation of new time steps. An interactive volume raycaster produces an integrated rendering of the selected time steps, and their computed differences are visualized in a dedicated chart to provide additional temporal similarity information. We illustrate and discuss the utility of our approach by means of different data sets from measurements and simulation.

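One way to picture the flow-based distance between time steps is as an optimal-transport distance between their value distributions. The sketch below is purely illustrative: it uses SciPy's one-dimensional Wasserstein distance between intensity histograms as a cheap stand-in for the paper's minimum-cost-flow formulation, and a greedy farthest-point rule instead of the paper's coverage optimization.

```python
# Illustrative sketch only: approximate the flow-based distance between volume
# time steps by a 1-D Wasserstein distance between their intensity histograms,
# then pick representative time steps by greedy farthest-point selection. The
# paper uses a richer minimum-cost-flow formulation and an adaptive, progressive
# selection scheme; this stand-in merely conveys the idea.
import numpy as np
from scipy.stats import wasserstein_distance

def timestep_distances(volumes, bins=64):
    """volumes: sequence of arrays (one per time step); returns a (T, T) matrix."""
    lo = min(float(v.min()) for v in volumes)
    hi = max(float(v.max()) for v in volumes)
    edges = np.linspace(lo, hi, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    hists = [np.histogram(v, bins=edges)[0] + 1e-12 for v in volumes]
    T = len(volumes)
    D = np.zeros((T, T))
    for i in range(T):
        for j in range(i + 1, T):
            D[i, j] = D[j, i] = wasserstein_distance(centers, centers, hists[i], hists[j])
    return D

def select_timesteps(D, k):
    """Greedily add the time step farthest from the ones already chosen."""
    chosen = [0]
    while len(chosen) < k:
        rest = [i for i in range(len(D)) if i not in chosen]
        chosen.append(max(rest, key=lambda i: min(D[i][j] for j in chosen)))
    return sorted(chosen)
```
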
Item Geometric Detection Algorithms for Cavities on Protein Surfaces in Molecular Graphics: A Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Simões, Tiago; Lopes, Daniel; Dias, Sérgio; Fernandes, Francisco; Pereira, João; Jorge, Joaquim; Bajaj, Chandrajit; Gomes, Abel; Chen, Min and Zhang, Hao (Richard)
Detecting and analysing protein cavities provides significant information about active sites for biological processes (e.g. protein–protein or protein–ligand binding) in molecular graphics and modelling. Using the three-dimensional (3D) structure of a given protein (i.e. atom types and their locations in 3D) as retrieved from a PDB (Protein Data Bank) file, it is now computationally viable to determine a description of these cavities. Such cavities correspond to pockets, clefts, invaginations, voids, tunnels, channels and grooves on the surface of a given protein. In this work, we survey the literature on protein cavity computation and classify algorithmic approaches into three categories: evolution-based, energy-based and geometry-based. Our survey focuses on geometric algorithms, whose taxonomy is extended to include not only sphere-, grid- and tessellation-based methods, but also surface-based, hybrid geometric, consensus and time-varying methods. Finally, we detail those techniques that have been customized for GPU (graphics processing unit) computing.

Item Group Modeling: A Unified Velocity-Based Approach (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Ren, Z.; Charalambous, P.; Bruneau, J.; Peng, Q.; Pettré, J.; Chen, Min and Zhang, Hao (Richard)
Crowd simulators are commonly used to populate movie or game scenes in the entertainment industry. Even though it is crucial to consider the presence of groups for the believability of a virtual crowd, most crowd simulations only take into account individual characters or a limited set of group behaviors. We introduce a unified solution that allows for simulations of crowds with diverse group properties such as social groups, marches, tourists and guides, etc. We extend the Velocity Obstacle approach for agent-based crowd simulations by introducing Velocity Connection: the set of velocities that keeps agents moving together while avoiding collisions and achieving goals. We demonstrate our approach to be robust, controllable and able to cover a large set of group behaviors.

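Both this paper and EACS above build on velocity-space reasoning; the core Velocity Obstacle test reduces to a small geometric check of whether a candidate relative velocity leads to a collision within a time horizon. A minimal sketch (illustrative only; the paper's Velocity Connections add group-cohesion constraints on top of such tests):

```python
# Minimal sketch of the basic Velocity Obstacle test used by agent-based crowd
# simulators: agent A with candidate velocity vA collides with agent B (velocity
# vB) within `horizon` seconds iff the relative motion reaches the disc of radius
# rA+rB around B's relative position. Illustrative only; the paper's Velocity
# Connections additionally constrain velocities so that group members stay together.
import numpy as np

def in_velocity_obstacle(pA, pB, vA, vB, rA, rB, horizon=5.0):
    p = np.asarray(pB, dtype=float) - np.asarray(pA, dtype=float)  # relative position
    v = np.asarray(vA, dtype=float) - np.asarray(vB, dtype=float)  # relative velocity
    r = rA + rB
    # |p - t*v| = r  <=>  (v.v) t^2 - 2 (p.v) t + (p.p - r^2) = 0
    a, b, c = v @ v, -2.0 * (p @ v), p @ p - r * r
    if c <= 0.0:
        return True                       # already overlapping
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return False                      # relative motion never reaches the disc
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # earliest crossing time
    return 0.0 <= t <= horizon
```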