36-Issue 8
Now showing 1 - 20 of 48
Item: Noise Reduction on G-Buffers for Monte Carlo Filtering
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Moon, Bochang; Iglesias-Guitian, Jose A.; McDonagh, Steven; Mitchell, Kenny; Chen, Min and Zhang, Hao (Richard)
We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances to effectively remove high-frequency noise while carefully preserving high-frequency edges in the G-buffers. We design a new anisotropic filter based on a per-pixel covariance matrix of world position samples. A general error estimator, Stein's unbiased risk estimator, is then applied to estimate the optimal trade-off between the bias and variance of pre-filtered results. We have demonstrated that our pre-filtering improves the results of existing filtering methods numerically and visually for challenging scenes where depth-of-field and motion blurring introduce a significant amount of noise in the G-buffers.
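One ingredient above, the covariance-driven anisotropic weighting, can be illustrated with a small Python/NumPy sketch: the covariance of a pixel's world-position samples defines a Mahalanobis distance that down-weights neighbours lying off the local surface. This is only a schematic of that step, not the authors' pre-filter; the SURE-based bias/variance selection is omitted, and the neighbourhood handling and regularisation constant are assumptions.

    import numpy as np

    def anisotropic_weights(center_samples, neighbor_positions, eps=1e-4):
        """Filter weights for neighbouring pixels, derived from the covariance
        of the centre pixel's world-position samples (illustrative only)."""
        mean = center_samples.mean(axis=0)
        cov = np.cov(center_samples, rowvar=False) + eps * np.eye(3)  # 3x3 per-pixel covariance
        cov_inv = np.linalg.inv(cov)
        d = neighbor_positions - mean                  # (K, 3) offsets to neighbour pixels
        m = np.einsum('ki,ij,kj->k', d, cov_inv, d)    # squared Mahalanobis distances
        w = np.exp(-0.5 * m)
        return w / w.sum()

    def prefilter_pixel(center_samples, neighbor_positions, neighbor_values):
        """Weighted average of a noisy G-buffer value (e.g. a normal or albedo)
        over the neighbourhood, using the anisotropic weights."""
        w = anisotropic_weights(center_samples, neighbor_positions)
        return (w[:, None] * neighbor_values).sum(axis=0)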
Item: EACS: Effective Avoidance Combination Strategy
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Bruneau, J.; Pettré, J.; Chen, Min and Zhang, Hao (Richard)
When navigating in crowds, humans are able to move efficiently between people. They look ahead to know which path would reduce the complexity of their interactions with others. Current navigation systems for virtual agents consider long-term planning to find a path in the static environment and short-term reactions to avoid collisions with close obstacles. Recently some mid-term considerations have been added to avoid high-density areas. However, there is no mid-term planning among static and dynamic obstacles that would enable the agent to look ahead and avoid difficult paths or find easy ones as humans do. In this paper, we present a system for such mid-term planning. This system is added to the navigation process between pathfinding and local avoidance to improve the navigation of virtual agents. We show the capacities of such a system using several case studies. Finally, we use an energy criterion to compare trajectories computed with and without the mid-term planning.

Item: Virtual Inflation of the Cerebral Artery Wall for the Integrated Exploration of OCT and Histology Data
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Glaßer, S.; Hoffmann, T.; Boese, A.; Voß, S.; Kalinski, T.; Skalej, M.; Preim, B.; Chen, Min and Zhang, Hao (Richard)
Intravascular imaging provides new insights into the condition of vessel walls. This is crucial for cerebrovascular diseases including stroke and cerebral aneurysms, where it may present an important factor for indication of therapy. In this work, we provide new information about cerebral artery walls by combining ex vivo optical coherence tomography (OCT) imaging with histology data sets. To overcome the obstacles of deflated and collapsed vessels due to the missing blood pressure, the lack of co-alignment, as well as the geometrical shape deformations due to catheter probing, we developed a new image processing method. We locally sample the vessel wall thickness based on the (deflated) vessel lumen border instead of the vessel's centerline. Our method is embedded in a multi-view framework where correspondences between OCT and histology can be highlighted via brushing and linking, yielding OCT signal characteristics of the cerebral artery wall and its pathologies. Finally, we enrich the data views with a hierarchical clustering representation which is linked via virtual inflation and further supports the deduction of vessel wall pathologies.

Item: Hexahedral Meshing With Varying Element Sizes
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Xu, Kaoji; Gao, Xifeng; Deng, Zhigang; Chen, Guoning; Chen, Min and Zhang, Hao (Richard)
Hexahedral (or Hex-) meshes are preferred in a number of scientific and engineering simulations and analyses due to their desired numerical properties. Recent state-of-the-art techniques can generate high-quality hex-meshes. However, they typically produce hex-meshes with uniform element sizes and thus may fail to preserve small-scale features on the boundary surface. In this work, we present a new framework that enables users to generate hex-meshes with varying element sizes so that small features will be filled with smaller and denser elements, while the transition from smaller elements to larger ones is smooth, compared to the octree-based approach. This is achieved by first detecting regions of interest (ROIs) of small-scale features. These ROIs are then magnified using the as-rigid-as-possible deformation with either an automatically determined or a user-specified scale factor. A hex-mesh is then generated from the deformed mesh using existing approaches that produce hex-meshes with uniform-sized elements. This initial hex-mesh is then mapped back to the original volume before magnification to adjust the element sizes in those ROIs. We have applied this framework to a variety of man-made and natural models to demonstrate its effectiveness.
Item: Articulated-Motion-Aware Sparse Localized Decomposition
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Yupan; Li, Guiqing; Zeng, Zhichao; He, Huayun; Chen, Min and Zhang, Hao (Richard)
Compactly representing time-varying geometries is an important issue in dynamic geometry processing. This paper proposes a framework of sparse localized decomposition for given animated meshes by analyzing the variation of edge lengths and dihedral angles (LAs) of the meshes. It first computes the length and dihedral angle of each edge for poses and then evaluates the difference (residuals) between the LAs of an arbitrary pose and their counterparts in a reference one. Performing sparse localized decomposition on the residuals yields a set of components which can perfectly capture local motion of articulations. It supports intuitive articulation motion editing through manipulating the blending coefficients of these components. To robustly reconstruct poses from altered LAs, we devise a connection-map-based algorithm which consists of two steps of linear optimization. A variety of experiments show that our decomposition is truly localized with respect to rotational motions and outperforms state-of-the-art approaches in precisely capturing local articulated motion.
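The "LA" quantities that the decomposition above operates on are just per-edge lengths and dihedral angles; a minimal Python/NumPy sketch of computing them and the residuals against a reference pose is given below. The edge encoding is a hypothetical convenience, and the sparse localized decomposition itself is not shown.

    import numpy as np

    def edge_length_angle(verts, edge):
        """Length and dihedral angle of one interior edge.
        edge = (i, j, k, l): (i, j) is the edge, k and l are the opposite
        vertices of the two incident triangles (assumed encoding)."""
        i, j, k, l = edge
        length = np.linalg.norm(verts[j] - verts[i])
        n1 = np.cross(verts[j] - verts[i], verts[k] - verts[i])
        n2 = np.cross(verts[l] - verts[i], verts[j] - verts[i])
        n1 /= np.linalg.norm(n1)
        n2 /= np.linalg.norm(n2)
        angle = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
        return length, angle

    def la_residuals(pose_verts, ref_verts, edges):
        """Per-edge (length, angle) differences between an arbitrary pose and
        the reference pose; these residuals would feed the decomposition."""
        cur = np.array([edge_length_angle(pose_verts, e) for e in edges])
        ref = np.array([edge_length_angle(ref_verts, e) for e in edges])
        return cur - ref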
Item: Distributed Optimization Framework for Shadow Removal in Multi-Projection Systems
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Tsukamoto, J.; Iwai, D.; Kashima, K.; Chen, Min and Zhang, Hao (Richard)
This paper proposes a novel shadow removal technique for cooperative projection systems based on spatiotemporal prediction. In our previous work, we proposed a distributed feedback algorithm, which is implementable in cooperative projection environments subject to data transfer constraints between components. A weakness of this scheme is that the compensation is conducted in each pixel independently. As a result, spatiotemporal information of the environmental change cannot be utilized even if it is available. In view of this, we specifically investigate the situation where some of the projectors are occluded by a moving object whose one-frame-ahead behaviour is predictable. In order to remove the resulting shadow, we propose a novel error propagating scheme that is still implementable in a distributed manner and enables us to incorporate the prediction information of the obstacle. It is demonstrated theoretically and experimentally that the proposed method significantly improves the shadow removal performance in comparison to the previous work.

Item: Tree Branch Level of Detail Models for Forest Navigation
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhang, Xiaopeng; Bao, Guanbo; Meng, Weiliang; Jaeger, Marc; Li, Hongjun; Deussen, Oliver; Chen, Baoquan; Chen, Min and Zhang, Hao (Richard)
We present a level of detail (LOD) method designed for tree branches. It can be combined with methods for processing tree foliage to facilitate navigation through large virtual forests. Starting from a skeletal representation of a tree, we fit polygon meshes of various densities to the skeleton while the mesh density is adjusted according to the required visual fidelity. For distant models, these branch meshes are gradually replaced with semi-transparent lines until the tree recedes to a few lines. Construction of these complete LOD models is guided by error metrics to ensure smooth transitions between adjacent LOD models. We then present an instancing technique for discrete LOD branch models, consisting of polygon meshes plus semi-transparent lines. Line models with different transparencies are instanced on the GPU by merging multiple tree samples into a single model. Our technique reduces the number of draw calls on the GPU and increases rendering performance. Our experiments demonstrate that large-scale forest scenes can be rendered with excellent detail and shadows in real time.
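At run time, an LOD scheme like the one above amounts to picking a representation from the tree's projected size on screen. The sketch below shows that selection step only, with placeholder thresholds; the paper's branch-specific meshes, error metrics and instancing are not modelled.

    import math

    def select_branch_lod(tree_radius, distance, fov_y, screen_height_px,
                          mesh_threshold_px=60.0, line_threshold_px=8.0):
        """Choose a branch representation from the projected size of the tree.
        Thresholds are illustrative placeholders, not values from the paper."""
        # approximate projected height (in pixels) of the tree's bounding sphere
        projected_px = (tree_radius / (distance * math.tan(fov_y / 2.0))) * screen_height_px
        if projected_px > mesh_threshold_px:
            return "polygon_mesh"        # dense or coarse branch mesh
        if projected_px > line_threshold_px:
            return "mesh_plus_lines"     # coarse mesh with semi-transparent lines
        return "lines_only"              # distant tree recedes to a few lines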
Item: Data-Driven Shape Interpolation and Morphing Editing
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Gao, Lin; Chen, Shu-Yu; Lai, Yu-Kun; Xia, Shihong; Chen, Min and Zhang, Hao (Richard)
Shape interpolation has many applications in computer graphics such as morphing for computer animation. In this paper, we propose a novel data-driven mesh interpolation method. We adapt patch-based linear rotational invariant coordinates to effectively represent deformations of models in a shape collection, and utilize this information to guide the synthesis of interpolated shapes. Unlike previous data-driven approaches, we use a rotation/translation invariant representation which defines the plausible deformations in a global continuous space. By effectively exploiting the knowledge in the shape space, our method produces realistic interpolation results at interactive rates, outperforming state-of-the-art methods for challenging cases. We further propose a novel approach to interactive editing of shape morphing according to the shape distribution. The user can explore the morphing path and select example models intuitively and adjust the path with simple interactions to edit the morphing sequences. This provides a useful tool to allow users to generate desired morphing with little effort. We demonstrate the effectiveness of our approach using various examples.

Item: Geometric Detection Algorithms for Cavities on Protein Surfaces in Molecular Graphics: A Survey
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Simões, Tiago; Lopes, Daniel; Dias, Sérgio; Fernandes, Francisco; Pereira, João; Jorge, Joaquim; Bajaj, Chandrajit; Gomes, Abel; Chen, Min and Zhang, Hao (Richard)
Detecting and analysing protein cavities provides significant information about active sites for biological processes (e.g. protein–protein or protein–ligand binding) in molecular graphics and modelling. Using the three-dimensional (3D) structure of a given protein (i.e. atom types and their locations in 3D) as retrieved from a PDB (Protein Data Bank) file, it is now computationally viable to determine a description of these cavities. Such cavities correspond to pockets, clefts, invaginations, voids, tunnels, channels and grooves on the surface of a given protein. In this work, we survey the literature on protein cavity computation and classify algorithmic approaches into three categories: evolution-based, energy-based and geometry-based. Our survey focuses on geometric algorithms, whose taxonomy is extended to include not only sphere-, grid- and tessellation-based methods, but also surface-based, hybrid geometric, consensus and time-varying methods. Finally, we detail those techniques that have been customized for GPU (graphics processing unit) computing.
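To make the sphere/grid family of geometric methods mentioned above concrete, the following is a heavily simplified, LIGSITE-style grid scan in Python/NumPy: grid points inside any probe-padded atom sphere count as protein, and a solvent point is flagged as cavity-like when protein encloses it along enough of the three axis directions. Grid spacing, probe radius and the enclosure threshold are illustrative choices, not values from the survey.

    import numpy as np

    def grid_cavity_scan(atom_centers, atom_radii, spacing=1.0, probe=1.4, min_enclosed=2):
        """Very simplified grid-based cavity detection (LIGSITE-like scan)."""
        lo = atom_centers.min(axis=0) - 5.0
        hi = atom_centers.max(axis=0) + 5.0
        xs, ys, zs = (np.arange(lo[d], hi[d], spacing) for d in range(3))
        grid = np.stack(np.meshgrid(xs, ys, zs, indexing='ij'), axis=-1)

        # grid points inside any probe-padded atom sphere are 'protein'
        protein = np.zeros(grid.shape[:3], dtype=bool)
        for c, r in zip(atom_centers, atom_radii):
            protein |= ((grid - c) ** 2).sum(axis=-1) <= (r + probe) ** 2

        def buried_along(axis):
            # protein occurs both before and after the point along this axis
            before = np.maximum.accumulate(protein, axis=axis)
            after = np.flip(np.maximum.accumulate(np.flip(protein, axis=axis), axis=axis), axis=axis)
            return before & after

        enclosure = sum(buried_along(a).astype(int) for a in range(3))
        return (~protein) & (enclosure >= min_enclosed)   # candidate cavity voxels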
Item: Category-Specific Salient View Selection via Deep Convolutional Neural Networks
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Kim, Seong-heum; Tai, Yu-Wing; Lee, Joon-Young; Park, Jaesik; Kweon, In So; Chen, Min and Zhang, Hao (Richard)
In this paper, we present a new framework to determine upright and front orientations and detect salient views of 3D models. The salient viewpoint to human preferences is the most informative projection with correct upright orientation. Our method utilizes two Convolutional Neural Network (CNN) architectures to encode category-specific information learnt from a large number of 3D shapes and 2D images on the web. Using the first CNN model with 3D voxel data, we generate a CNN shape feature to decide natural upright orientation of 3D objects. Once a 3D model is upright-aligned, the front projection and salient views are scored by category recognition using the second CNN model. The second CNN is trained over popular photo collections from internet users. In order to model comfortable viewing angles of 3D models, a category-dependent prior is also learnt from the users. Our approach effectively combines category-specific scores and classical evaluations to produce a data-driven viewpoint saliency map. The best viewpoints from the method are quantitatively and qualitatively validated with more than 100 objects from 20 categories. Our thumbnail images of 3D models are the most favoured among those from different approaches.
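The second-stage scoring described above reduces to rendering the upright-aligned model from candidate viewpoints and ranking the views by how confidently a category classifier recognises them. A minimal sketch of that loop follows; render_view and classify are hypothetical stand-ins, not the paper's CNNs, and the combination with classical saliency measures is omitted.

    import numpy as np

    def rank_views(model, category, viewpoints, render_view, classify, top_k=3):
        """Rank candidate viewpoints by category-recognition confidence.
        render_view(model, v) -> image and classify(image) -> {category: prob}
        are assumed, hypothetical callables."""
        scores = []
        for v in viewpoints:
            image = render_view(model, v)
            probs = classify(image)                  # e.g. softmax over categories
            scores.append(probs.get(category, 0.0))  # confidence for the known class
        order = np.argsort(scores)[::-1]
        return [viewpoints[i] for i in order[:top_k]]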
Item: Deformation Grammars: Hierarchical Constraint Preservation Under Deformation
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Vimont, Ulysse; Rohmer, Damien; Begault, Antoine; Cani, Marie-Paule; Chen, Min and Zhang, Hao (Richard)
Deformation grammars are a novel procedural framework for sculpting hierarchical 3D models in an object-dependent manner. They process object deformations as symbols thanks to user-defined interpretation rules. We use them to define hierarchical deformation behaviours tailored for each model, enabling any sculpting gesture to be interpreted as some adapted constraint-preserving deformation. A variety of object-specific constraints can be enforced using this framework, such as maintaining distributions of subparts, avoiding self-penetrations or meeting semantic-based user-defined rules. The operations used to maintain constraints are kept transparent to the user, enabling them to focus on their design. We demonstrate the feasibility and the versatility of this approach on a variety of examples, implemented within an interactive sculpting system.

Item: DYVERSO: A Versatile Multi-Phase Position-Based Fluids Solution for VFX
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Alduán, Iván; Tena, Angel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
Many impressive fluid simulation methods have been presented in research papers before. These papers typically focus on demonstrating particular innovative features, but they do not meet in a comprehensive manner the production demands of actual VFX pipelines. VFX artists seek methods that are flexible, efficient, robust and scalable, and these goals often conflict with each other. In this paper, we present a multi-phase particle-based fluid simulation framework, based on the well-known Position-Based Fluids (PBF) method, designed to address VFX production demands. Our simulation framework handles multi-phase interactions robustly thanks to a modified constraint formulation for density-contrast PBF, and it also supports the interaction of fluids sampled at different resolutions. We put special care into data structure design and implementation details. Our framework highlights cache-efficient, GPU-friendly data structures, an improved spatial voxelization technique based on Z-index sorting, tuned-up simulation algorithms and two-way-coupled collision handling based on VDB fields. Altogether, our fluid simulation framework empowers artists with the efficiency, scalability and versatility needed for simulating very diverse scenes and effects.
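For context on the constraint that DYVERSO modifies, the standard single-phase Position-Based Fluids density constraint and its Lagrange multipliers can be sketched as below (textbook PBF with poly6/spiky kernels; the paper's density-contrast, multi-phase and multi-resolution extensions are not reproduced here). Neighbour lists are assumed to include the particle itself.

    import numpy as np

    def poly6(r, h):
        return (315.0 / (64.0 * np.pi * h**9)) * (h*h - r*r)**3 if r < h else 0.0

    def spiky_grad(rij, h):
        r = np.linalg.norm(rij)
        if r < 1e-8 or r >= h:
            return np.zeros(3)
        return (-45.0 / (np.pi * h**6)) * (h - r)**2 * (rij / r)

    def pbf_lambdas(pos, neighbors, h, rest_density, eps=100.0):
        """Per-particle Lagrange multipliers for the classic PBF density
        constraint C_i = rho_i / rho_0 - 1 (single phase, illustrative)."""
        lambdas = np.zeros(len(pos))
        for i, nbrs in enumerate(neighbors):
            rho = sum(poly6(np.linalg.norm(pos[i] - pos[j]), h) for j in nbrs)
            C = rho / rest_density - 1.0
            grad_i = np.zeros(3)
            sum_grad_sq = 0.0
            for j in nbrs:
                if j == i:
                    continue
                g = spiky_grad(pos[i] - pos[j], h) / rest_density
                grad_i += g
                sum_grad_sq += np.dot(g, g)
            lambdas[i] = -C / (sum_grad_sq + np.dot(grad_i, grad_i) + eps)
        return lambdas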
Item: Multi-Variate Gaussian-Based Inverse Kinematics
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Huang, Jing; Wang, Qi; Fratarcangeli, Marco; Yan, Ke; Pelachaud, Catherine; Chen, Min and Zhang, Hao (Richard)
Inverse kinematics (IK) equations are usually solved through approximated linearizations or heuristics. These methods lead to character animations that are unnatural looking or unstable because they do not consider both the motion coherence and limits of human joints. In this paper, we present a method based on the formulation of multi-variate Gaussian distribution models (MGDMs), which precisely specify the soft joint constraints of a kinematic skeleton. Each distribution model is described by a covariance matrix and a mean vector representing both the joint limits and the coherence of motion of different limbs. The MGDMs are automatically learned from the motion capture data in a fast and unsupervised process. When the character is animated or posed, a Gaussian process synthesizes a new MGDM for each different vector of target positions, and the corresponding objective function is solved with Jacobian-based IK. This makes our method practical to use and easy to insert into pre-existing animation pipelines. Compared with previous works, our method is more stable and more precise, while also satisfying the anatomical constraints of human limbs. Our method leads to natural and realistic results without sacrificing real-time performance.
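The MGDM itself is simply a mean vector and covariance matrix estimated from joint-configuration vectors in the motion capture data, and its negative log-density acts as the soft constraint that the IK solve penalises. A small sketch of those two pieces follows; the Gaussian-process synthesis of target-specific MGDMs and the Jacobian-based solver are left out.

    import numpy as np

    def learn_mgdm(joint_samples):
        """Fit mean and (regularised) inverse covariance of joint-configuration
        vectors taken from mocap frames. joint_samples: (num_frames, num_dofs)."""
        mean = joint_samples.mean(axis=0)
        cov = np.cov(joint_samples, rowvar=False) + 1e-6 * np.eye(joint_samples.shape[1])
        return mean, np.linalg.inv(cov)

    def soft_constraint_cost(theta, mean, cov_inv):
        """Negative log-density up to a constant: small for natural poses,
        large for poses that violate the learnt limits or motion coherence."""
        d = theta - mean
        return 0.5 * d @ cov_inv @ d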
Item: Ontology-Based Representation and Modelling of Synthetic 3D Content: A State-of-the-Art Review
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Flotyński, Jakub; Walczak, Krzysztof; Chen, Min and Zhang, Hao (Richard)
An indispensable element of any practical 3D/VR/AR application is synthetic three-dimensional (3D) content. Such content is characterized by a variety of features (geometry, structure, space, appearance, animation and behaviour), which makes the modelling of 3D content a much more complex, difficult and time-consuming task than in the case of other types of content. One of the promising research directions aiming at simplification of modelling 3D content is the use of the semantic web approach. The formalism provided by semantic web techniques enables declarative knowledge-based modelling of content based on ontologies. Such modelling can be conducted at different levels of abstraction, possibly domain-specific, with inherent separation of concerns. The use of semantic web ontologies enables content representation independent of particular presentation platforms and facilitates indexing, searching and analysing content, thus contributing to increased content re-usability. A range of approaches have been proposed to permit semantic representation and modelling of synthetic 3D content. These approaches differ in the methodologies and technologies used as well as their scope and application domains. This paper provides a review of the current state of the art in representation and modelling of 3D content based on semantic web ontologies, together with a classification, characterization and discussion of the particular approaches.

Item: Visualization of Biomolecular Structures: State of the Art Revisited
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Kozlíková, B.; Krone, M.; Falk, M.; Lindow, N.; Baaden, M.; Baum, D.; Viola, I.; Parulek, J.; Hege, H.-C.; Chen, Min and Zhang, Hao (Richard)
Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview of techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three-dimensional, complex, large and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.

Item: The State of the Art in Integrating Machine Learning into Visual Analytics
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Endert, A.; Ribarsky, W.; Turkay, C.; Wong, B.L. William; Nabney, I.; Blanco, I. Díaz; Rossi, F.; Chen, Min and Zhang, Hao (Richard)
Visual analytics systems combine machine learning or other analytic techniques with interactive data visualization to promote sensemaking and analytical reasoning. It is through such techniques that people can make sense of large, complex data. While progress has been made, the tactful combination of machine learning and data visualization is still under-explored. This state-of-the-art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances. Further, it presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions.
Item: Extracting Sharp Features from RGB-D Images
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Cao, Y-P.; Ju, T.; Xu, J.; Hu, S-M.; Chen, Min and Zhang, Hao (Richard)
Sharp edges are important shape features and their extraction has been extensively studied both on point clouds and surfaces. We consider the problem of extracting sharp edges from a sparse set of colour-and-depth (RGB-D) images. The noise-ridden depth measurements are challenging for existing feature extraction methods that work solely in the geometric domain (e.g. points or meshes). By utilizing both colour and depth information, we propose a novel feature extraction method that produces much cleaner and more coherent feature lines. We make two technical contributions. First, we show that intensity edges can augment the depth map to improve normal estimation and feature localization from a single RGB-D image. Second, we designed a novel algorithm for consolidating feature points obtained from multiple RGB-D images. By utilizing normals and ridge/valley types associated with the feature points, our algorithm is effective in suppressing noise without smearing nearby features.
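The baseline that the intensity-edge augmentation above improves on is plain normal estimation from the depth data; for a back-projected depth image, that baseline is essentially the cross product of the image-space tangents, as in this sketch (camera intrinsics and the actual augmentation are omitted).

    import numpy as np

    def normals_from_points(points):
        """Baseline per-pixel normals from a back-projected depth image.
        points: (H, W, 3) camera-space positions for each pixel."""
        du = np.gradient(points, axis=1)     # tangent along the horizontal image direction
        dv = np.gradient(points, axis=0)     # tangent along the vertical image direction
        n = np.cross(du, dv)
        norm = np.linalg.norm(n, axis=-1, keepdims=True)
        return n / np.clip(norm, 1e-8, None)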
Item: Group Modeling: A Unified Velocity-Based Approach
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Ren, Z.; Charalambous, P.; Bruneau, J.; Peng, Q.; Pettré, J.; Chen, Min and Zhang, Hao (Richard)
Crowd simulators are commonly used to populate movie or game scenes in the entertainment industry. Even though it is crucial to consider the presence of groups for the believability of a virtual crowd, most crowd simulations only take into account individual characters or a limited set of group behaviors. We introduce a unified solution that allows for simulations of crowds that have diverse group properties such as social groups, marches, tourists and guides, etc. We extend the Velocity Obstacle approach for agent-based crowd simulations by introducing Velocity Connection: the set of velocities that keep agents moving together while avoiding collisions and achieving goals. We demonstrate our approach to be robust, controllable, and able to cover a large set of group behaviors.
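The Velocity Obstacle test that Velocity Connections build on asks whether a candidate velocity leads to a collision with another agent within a time horizon. A standard disc-agent version of that test is sketched below; this is the classical building block, not the Velocity Connection construction itself, and the horizon value is a placeholder.

    import numpy as np

    def in_velocity_obstacle(p_a, p_b, v_candidate, v_b, r_a, r_b, horizon=5.0):
        """True if agent A moving with v_candidate collides with agent B
        (moving with v_b) within the time horizon, treating both as discs."""
        rel_p = p_b - p_a                 # relative position
        rel_v = v_candidate - v_b         # relative velocity
        r = r_a + r_b                     # combined radius
        c = np.dot(rel_p, rel_p) - r * r
        if c <= 0.0:
            return True                   # already overlapping
        a = np.dot(rel_v, rel_v)
        if a < 1e-12:
            return False                  # no relative motion
        b = -2.0 * np.dot(rel_p, rel_v)
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return False                  # trajectories never come within r
        t = (-b - np.sqrt(disc)) / (2.0 * a)   # earliest time the discs touch
        return 0.0 < t <= horizon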
Item: Convolutional Sparse Coding for Capturing High-Speed Video Content
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Serrano, Ana; Garces, Elena; Masia, Belen; Gutierrez, Diego; Chen, Min and Zhang, Hao (Richard)
Video capture is limited by the trade-off between spatial and temporal resolution: when capturing videos of high temporal resolution, the spatial resolution decreases due to bandwidth limitations in the capture system. Achieving both high spatial and temporal resolution is only possible with highly specialized and very expensive hardware, and even then the same basic trade-off remains. The recent introduction of compressive sensing and sparse reconstruction techniques allows for the capture of high-speed video, by coding the temporal information in a single frame, and then reconstructing the full video sequence from this single coded image and a trained dictionary of image patches. In this paper, we first analyse this approach, and find insights that help improve the quality of the reconstructed videos. We then introduce a novel technique based on convolutional sparse coding (CSC), and show how it outperforms the state-of-the-art, patch-based approach in terms of flexibility and efficiency, due to the convolutional nature of its filter banks. The key idea for CSC high-speed video acquisition is extending the basic formulation by imposing an additional constraint in the temporal dimension, which enforces sparsity of the first-order derivatives over time.

Item: A Bi-Directional Procedural Model for Architectural Design
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Hua, H.; Chen, Min and Zhang, Hao (Richard)
It is a challenge for shape grammars to incorporate spatial hierarchy and interior connectivity of buildings in early design stages. To resolve this difficulty, we developed a bi-directional procedural model: the forward process constructs the derivation tree with production rules, while the backward process realizes the tree with shapes in a stepwise manner (from leaves to the root). Each inverse-derivation step involves essential geometric-topological reasoning. With this bi-directional framework, design constraints and objectives are encoded in the grammar-shape translation. We conducted two applications. The first employs geometric primitives as terminals and the other uses previous designs as terminals. Both approaches lead to consistent interior connectivity and a rich spatial hierarchy. The results imply that bespoke geometric-topological processing helps shape grammars to create plausible, novel compositions. Our model is more productive than hand-coded shape grammars, while it is less computation-intensive than evolutionary treatment of shape grammars.