35-Issue 1
Browsing 35-Issue 1 by Title
Now showing 1 - 20 of 24
Item Anisotropic Strain Limiting for Quadrilateral and Triangular Cloth Meshes (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Ma, Guanghui; Ye, Juntao; Li, Jituo; Zhang, Xiaopeng; Chen, Min and Zhang, Hao (Richard)
Cloth simulation systems often suffer from excessive extension of the polygonal mesh, so an additional strain‐limiting process is typically used as a remedy in the simulation pipeline. A cloth model can be discretized as either a quadrilateral mesh or a triangular mesh, and their strains are measured differently. The edge‐based strain‐limiting method for a quadrilateral mesh creates anisotropic behaviour by nature, as discretization usually aligns the edges along the warp and weft directions. We improve this anisotropic technique by replacing the traditionally used equality constraints with inequality ones in the mathematical optimization, and achieve faster convergence. For a triangular mesh, the state‐of‐the‐art technique measures and constrains the strains along the two principal (and constantly changing) directions in a triangle, resulting in an isotropic behaviour which prohibits shearing. Based on the framework of inequality‐constrained optimization, we propose a warp and weft strain‐limiting formulation. This anisotropic model is more appropriate for textile materials that do not exhibit isotropic strain behaviour.
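The inequality-constrained strain limiting described in the item above can be sketched in heavily simplified form. This is a plain Gauss–Seidel constraint projection, not the paper's actual optimization; the function name, the symmetric correction and the 10% strain bound are illustrative assumptions only:

```python
import numpy as np

def limit_strain(positions, edges, rest_lengths, max_strain=0.1, iterations=20):
    """Project edge lengths into [(1-s)*L0, (1+s)*L0] (inequality constraints).

    Simplified sketch of edge-based strain limiting for a quad mesh whose
    edges follow the warp/weft directions; edges already inside the bounds
    are left untouched, which is the point of using inequalities.
    """
    p = positions.copy()
    for _ in range(iterations):
        for (i, j), L0 in zip(edges, rest_lengths):
            d = p[j] - p[i]
            L = np.linalg.norm(d)
            lo, hi = (1 - max_strain) * L0, (1 + max_strain) * L0
            if L < lo or L > hi:              # act only when the inequality is violated
                target = np.clip(L, lo, hi)
                corr = 0.5 * (L - target) / L * d
                p[i] += corr                  # move both endpoints symmetrically
                p[j] -= corr
    return p
```

An edge stretched to twice its rest length is pulled back to the upper bound, while an edge within the bounds is not modified at all.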
Item Autocorrelation Descriptor for Efficient Co‐Alignment of 3D Shape Collections (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Averkiou, Melinos; Kim, Vladimir G.; Mitra, Niloy J.; Chen, Min and Zhang, Hao (Richard)
Co‐aligning a collection of shapes to a consistent pose is a common problem in shape analysis with applications in shape matching, retrieval and visualization. We observe that resolving among some orientations is easier than others; for example, a common mistake for bicycles is to align front‐to‐back, while even the simplest algorithm would not erroneously pick an orthogonal alignment. The key idea of our work is to analyse rotational autocorrelations of shapes to facilitate shape co‐alignment. In particular, we use such an autocorrelation measure of individual shapes to decide which shape pairs might have well‐matching orientations and, if so, which configurations are likely to produce better alignments.
This significantly prunes the number of alignments to be examined, and leads to an efficient, scalable algorithm that performs comparably to state‐of‐the‐art techniques on benchmark data sets, but requires significantly fewer computations, resulting in a 2–16× speed improvement in our tests.
Item Colour Mapping: A Review of Recent Methods, Extensions and Applications (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Faridul, H. Sheikh; Pouli, T.; Chamaret, C.; Stauder, J.; Reinhard, E.; Kuzovkin, D.; Tremeau, A.; Chen, Min and Zhang, Hao (Richard)
The objective of colour mapping or colour transfer methods is to recolour a given image or video by deriving a mapping between that image and another image serving as a reference. These methods have received considerable attention in recent years, both in the academic literature and in industrial applications.
Methods for recolouring images have often appeared under the labels of colour correction, colour transfer or colour balancing, to name a few, but their goal is always the same: mapping the colours of one image to another. In this paper, we present a comprehensive overview of these methods and offer a classification of current solutions depending not only on their algorithmic formulation but also on their range of applications. We also provide a new dataset and a novel evaluation technique called ‘evaluation by colour mapping roundtrip’. We discuss the relative merit of each class of techniques through examples and show how colour mapping solutions have been applied to a diverse range of problems.
Item Continuity and Interpolation Techniques for Computer Graphics (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Gonzalez, F.; Patow, G.; Chen, Min and Zhang, Hao (Richard)
Continuity and interpolation have been crucial topics for computer graphics since its very beginnings. Every time we want to interpolate values across some area, we need to take a set of samples over that interpolating region.
However, interpolating samples faithfully, so that the results closely match the underlying functions, can be a tricky task: the functions being sampled may not be smooth and, in the worst case, faithful interpolation may even be impossible when they are not continuous. In those situations, providing the required continuity is not an easy task, and much work has been done to solve this problem. In this paper, we focus on the state of the art in continuity and interpolation in three stages of the real‐time rendering pipeline. We study these problems and their current solutions in texture space (2D), object space (3D) and screen space. With this review of the literature in these areas, we hope to bring new light to and foster research on these fundamental, yet not completely solved, problems in computer graphics.
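As a tiny concrete instance of the texture-space (2D) interpolation issues the survey above covers, bilinear texel lookup is C0-continuous across texel boundaries, whereas nearest-neighbour lookup jumps. This is a minimal sketch under simplifying assumptions (clamped addressing, no mipmapping or colour-space handling):

```python
def bilerp(tex, u, v):
    """Bilinearly interpolate a 2D grid `tex` at continuous coords (u, v).

    Blends the four surrounding texels by their fractional distances,
    giving a C0-continuous reconstruction of the sampled function.
    """
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the border
    fx, fy = u - x0, v - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

At texel centres the function reproduces the stored values exactly; between them it varies continuously.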
Item Editorial (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min; Zhang, Hao (Richard); Chen, Min and Zhang, Hao (Richard)
Item Environmental Objects for Authoring Procedural Scenes (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Grosbellet, Francois; Peytavie, Adrien; Guérin, Éric; Galin, Éric; Mérillou, Stéphane; Benes, Bedrich; Chen, Min and Zhang, Hao (Richard)
We propose a novel approach for authoring large scenes with automatic enhancement of objects to create geometric decoration details such as snow cover, icicles, fallen leaves, grass tufts or even trash. We introduce environmental objects that extend an input object geometry with a set of procedural effects that define how the object reacts to the environment, and with a set of scalar fields that define the influence of the object over the environment. The user controls the scene by modifying environmental variables, such as temperature or humidity fields. The scene definition is hierarchical: objects can be grouped and their behaviours can be set at each level of the hierarchy. Our per‐object definition allows us to optimize and accelerate the effects computation, which also enables us to generate large scenes with many geometric details at a very high level of detail. In our implementation, a complex urban scene of 10 000 m², represented with details of less than 1 cm, can be locally modified and entirely regenerated in a few seconds.
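The environmental-object idea in the item above (geometry reacting to scalar environment fields) can be illustrated with a toy snow-cover effect. Everything here is a made-up assumption for illustration: the field functions, the freezing threshold and the 0.01 accumulation constant are not from the paper:

```python
def snow_cover(surface_points, temperature, humidity, t_freeze=0.0):
    """Toy 'environmental effect': per-point snow thickness driven by
    temperature and humidity scalar fields sampled at each surface point.
    """
    thickness = []
    for p in surface_points:
        t, h = temperature(p), humidity(p)
        # snow accumulates only below freezing, scaled by available moisture
        thickness.append(max(0.0, t_freeze - t) * h * 0.01)
    return thickness
```

Changing the environment fields (e.g. raising the temperature) regenerates the decoration without touching the base geometry, which mirrors the paper's authoring workflow at a very high level.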
Item Fast ANN for High‐Quality Collaborative Filtering (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Tsai, Yun‐Ta; Steinberger, Markus; Pająk, Dawid; Pulli, Kari; Chen, Min and Zhang, Hao (Richard)
Collaborative filtering collects similar patches, jointly filters them and scatters the output back to the input patches; each pixel gets a contribution from each patch that overlaps with it, allowing signal reconstruction from highly corrupted data. Exploiting self‐similarity, however, requires finding matching image patches, which is an expensive operation. We propose a GPU‐friendly approximate nearest‐neighbour (ANN) algorithm that produces high‐quality results for any type of collaborative filter. We evaluate our ANN search against state‐of‐the‐art ANN algorithms in several application domains. Our method is orders of magnitude faster, yet provides similar or higher quality results than previous work.
Collaborative filtering is a powerful, yet computationally demanding denoising approach. (a) Relying on self‐similarity in the input data, collaborative filtering requires the search for patches which are similar to a reference patch (red). Filtering the patches, either by averaging the pixels or modifying the coefficients after a wavelet or other transformation, removes unwanted noise, and each output pixel is collaboratively filtered using all the denoised image patches that overlap the pixel. Our method accelerates the process of searching for similar patches and facilitates high‐quality collaborative filtering even on mobile devices.
Application examples for collaborative filtering include (left: our output; right: noisy input) (b) denoising an image burst, (c) filtering the samples for global illumination and (d) geometry reconstruction.
Item Full 3D Plant Reconstruction via Intrusive Acquisition (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Yin, Kangxue; Huang, Hui; Long, Pinxin; Gaissinski, Alexei; Gong, Minglun; Sharf, Andrei; Chen, Min and Zhang, Hao (Richard)
Digitally capturing vegetation using off‐the‐shelf scanners is a challenging problem. Plants typically exhibit large self‐occlusions and thin structures which cannot be properly scanned. Furthermore, plants are essentially dynamic, deforming over time, which yields additional difficulties in the scanning process. In this paper, we present a novel technique for acquiring and modelling plants and foliage. At the core of our method is an intrusive acquisition approach, which disassembles the plant into disjoint parts that can be accurately scanned and reconstructed offline. We use the reconstructed part meshes as 3D proxies for the reconstruction of the complete plant and devise a global‐to‐local non‐rigid registration technique that preserves specific plant characteristics. Our method is tested on plants of various styles, appearances and characteristics. Results show successful reconstructions with high accuracy with respect to the acquired data.
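The global-to-local registration in the plant reconstruction item above builds on rigid alignment as its baseline. A standard SVD/Procrustes (Kabsch) solution for corresponding point sets is sketched below; the paper's non-rigid, plant-specific terms are not modelled here:

```python
import numpy as np

def rigid_align(source, target):
    """Best rigid (rotation + translation) alignment of corresponding
    3D point sets, via the SVD/Procrustes solution.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Given correspondences, applying the recovered `R` and `t` to the source reproduces the target exactly when the motion really is rigid.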
Item Graph‐Based Wavelet Representation of Multi‐Variate Terrain Data (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Cioaca, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai‐Sorin; Chen, Min and Zhang, Hao (Richard)
Terrain data can be processed from the double perspective of computer graphics and graph theory. We propose a hybrid method that uses geometrical and vertex‐attribute information to construct a weighted graph reflecting the variability of the vertex data. As a planar graph, a generic terrain data set is subjected to a geometry‐sensitive vertex partitioning procedure. Through the use of a feature‐estimation heuristic combining a thin‐plate energy with a multi‐dimensional quadric error metric, we construct ‘even’ and ‘odd’ node subsets. Using an invertible lifting scheme, adapted from generic weighted graphs, detail vectors are extracted and used to recover or filter the node information. The design of the prediction and update filters improves the root mean squared error of the signal over general graph‐based approaches. As a key property of this design, preserving the mean of the graph signal becomes essential for decreasing the error measure and conserving the salient shape features.
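The predict/update filter design that this terrain method generalizes to weighted graphs can be illustrated in one dimension. Below is a Haar-style lifting step, a hedged sketch only (it assumes an even-length signal and is not the paper's graph construction); note how the update step preserves the signal mean, the property the abstract highlights:

```python
def lifting_forward(signal):
    """One level of a Haar-style lifting scheme: split into even/odd
    samples, predict odds from evens (detail), update evens so the
    mean of the approximation equals the mean of the input.
    Assumes len(signal) is even.
    """
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step (mean-preserving)
    return approx, detail

def lifting_inverse(approx, detail):
    """Exact inverse: undo the update, then the predict step."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

The scheme is invertible by construction, which is what lets detail vectors be stripped for filtering and re-applied for exact recovery.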
Item A Hierarchical Approach for Regular Centroidal Voronoi Tessellations (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Wang, L.; Hétroy‐Wheeler, F.; Boyer, E.; Chen, Min and Zhang, Hao (Richard)
In this paper, we consider Centroidal Voronoi Tessellations (CVTs) and study their regularity. CVTs are geometric structures that enable regular tessellations of geometric objects and are widely used in shape modelling and analysis. While several efficient iterative schemes, with defined local convergence properties, have been proposed to compute CVTs, little attention has been paid to the evaluation of the resulting cell decompositions. In this paper, we propose a regularity criterion that allows us to evaluate and compare CVTs independently of their sizes and of their cell numbers. This criterion allows us to compare CVTs on a common basis. It builds on earlier theoretical work showing that the second moments of cells converge to a lower bound when optimizing CVTs. In addition to proposing a regularity criterion, this paper also considers computational strategies to determine regular CVTs. We introduce a hierarchical framework that propagates regularity over decomposition levels and hence provides CVTs with provably better regularity than existing methods. We illustrate these principles with a wide range of experiments on synthetic and real models.
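For context on the iterative CVT schemes mentioned above, the classic Lloyd relaxation baseline (not the paper's hierarchical method) can be sketched with a discrete Voronoi assignment over a dense point sampling of the domain:

```python
import numpy as np

def lloyd(points, sites, iterations=10):
    """Lloyd relaxation: move each site to the centroid of its Voronoi cell.

    The Voronoi diagram is approximated by assigning every sample point
    in `points` to its nearest site; `sites` is updated in place.
    """
    for _ in range(iterations):
        # nearest-site assignment (discrete Voronoi diagram)
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        for i in range(len(sites)):
            cell = points[owner == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)   # centroid update
    return sites
```

With a single site the fixed point is simply the centroid of the whole domain, which makes for an easy sanity check.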
Item Issue Information (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
Item Issue Information ‐ TOC (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
Item Lauren (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
Item Mesh Sequence Morphing (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Xue; Feng, Jieqing; Bechmann, Dominique; Chen, Min and Zhang, Hao (Richard)
Morphing is an important technique for the generation of special effects in computer animation. However, an analogous technique has not yet been applied to the increasingly prevalent animation representation, i.e. 3D mesh sequences. In this paper, a technique for morphing between two mesh sequences is proposed to simultaneously blend motions and interpolate shapes. Based on all possible combinations of the motions and geometries, a universal framework is proposed to recreate various plausible mesh sequences. To enable a universal framework, we design a skeleton‐driven cage‐based deformation transfer scheme which can account for motion blending and geometry interpolation.
To establish one‐to‐one correspondence for interpolating between two mesh sequences, a hybrid cross‐parameterization scheme that fully utilizes the skeleton‐driven cage control structure and adapts user‐specified joint‐like markers is introduced. The experimental results demonstrate that the framework not only accomplishes mesh sequence morphing, but is also suitable for a wide range of applications such as deformation transfer, motion blending or transition and dynamic shape interpolation.
Item Mobile Surface Reflectometry (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Riviere, J.; Peers, P.; Ghosh, A.; Chen, Min and Zhang, Hao (Richard)
We present two novel mobile reflectometry approaches for acquiring detailed spatially varying isotropic surface reflectance and mesostructure of a planar material sample using commodity mobile devices. The first approach relies on the integrated camera and flash pair present on typical mobile devices to support free‐form handheld acquisition of spatially varying rough specular material samples. The second approach, suited for highly specular samples, uses the LCD panel to illuminate the sample with polarized second‐order gradient illumination.
To address the limited overlap of the front‐facing camera's view and the LCD illumination (and thus the limited sample size), we propose a novel appearance transfer method that combines controlled reflectance measurements of a small exemplar section with uncontrolled reflectance measurements of the full sample under natural lighting. Finally, we introduce a novel surface detail enhancement method that adds fine‐scale surface mesostructure from close‐up observations under uncontrolled natural lighting. We demonstrate the accuracy and versatility of the proposed mobile reflectometry methods on a wide variety of spatially varying materials.
Item Planar Shape Detection and Regularization in Tandem (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Oesau, Sven; Lafarge, Florent; Alliez, Pierre; Chen, Min and Zhang, Hao (Richard)
We present a method for planar shape detection and regularization from raw point sets. The geometric modelling and processing of man‐made environments from measurement data often relies upon the robust detection of planar primitive shapes. In addition, the detection and reinforcement of regularities between planar parts is a means to increase resilience to missing or defect‐laden data, as well as to reduce the complexity of models and algorithms down the modelling pipeline. The main novelty behind our method is to perform detection and regularization in tandem. We first sample a sparse set of seeds uniformly on the input point set, and then perform shape detection in parallel through region growing, interleaved with regularization through detection and reinforcement of regular relationships (coplanar, parallel and orthogonal). In addition to addressing the end goal of regularization, such reinforcement also improves data fitting and provides guidance for clustering small parts into larger planar parts. We evaluate our approach against a wide range of inputs and under four criteria: geometric fidelity, coverage, regularity and running times. Our approach compares well with available implementations, such as the efficient random sample consensus‐based approach proposed by Schnabel and co‐authors in 2007.
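The primitive at the heart of region-growing planar detection like the method above is a least-squares plane fit. A standard PCA/SVD version is sketched below (an assumption for illustration, not the authors' exact estimator): the plane normal is the singular vector of the centred points with the smallest singular value:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point set.

    Returns (normal, d) such that normal . x + d = 0 for points on
    the plane; the normal is the last right singular vector of the
    centred point matrix.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # direction of least variance
    d = -normal @ centroid
    return normal, d
```

Region growing then adds neighbouring points whose distance `|normal . p + d|` stays below an inlier threshold, refitting the plane as the region grows.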
Item Practical Low‐Cost Recovery of Spectral Power Distributions (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Alvarez‐Cortes, Sara; Kunkel, Timo; Masia, Belen; Chen, Min and Zhang, Hao (Richard)
Measuring the spectral power distribution of a light source, that is, its emission as a function of wavelength, typically requires the use of spectrophotometers or multi‐spectral cameras. Here, we propose a low‐cost system that enables the recovery of the visible‐light spectral signature of different types of light sources without requiring highly complex or specialized equipment, using just off‐the‐shelf, widely available components. To do this, a standard Digital Single‐Lens Reflex (DSLR) camera and a diffraction filter are used, sacrificing the spatial dimension for spectral resolution. We present the image formation model and the calibration process necessary to recover the spectrum, including spectral calibration and amplitude recovery. We also assess the robustness of our method and perform a detailed analysis exploring the parameters influencing its accuracy. Further, we show applications of the system in image processing and rendering.
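The spectral-calibration step mentioned above amounts to mapping pixel position in the diffraction pattern to wavelength. A first-order least-squares fit through a few known spectral lines might look like the following; the pixel positions and wavelengths are purely illustrative values, and the paper's calibration model is more involved than a straight line:

```python
def calibrate_wavelengths(pixel_positions, known_wavelengths):
    """Fit wavelength = slope * pixel + intercept by least squares,
    using a handful of reference spectral lines, and return the
    resulting pixel-to-wavelength mapping.
    """
    n = len(pixel_positions)
    sx = sum(pixel_positions)
    sy = sum(known_wavelengths)
    sxx = sum(p * p for p in pixel_positions)
    sxy = sum(p * w for p, w in zip(pixel_positions, known_wavelengths))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda pixel: slope * pixel + intercept
```

Once calibrated, every column of the diffraction streak can be assigned a wavelength, turning the camera row into a 1D spectrum.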
Item Projective Blue‐Noise Sampling (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Reinert, Bernhard; Ritschel, Tobias; Seidel, Hans‐Peter; Georgiev, Iliyan; Chen, Min and Zhang, Hao (Richard)
We propose projective blue‐noise patterns that retain their blue‐noise characteristics when undergoing one or multiple projections onto lower‐dimensional subspaces. These patterns are produced by extending existing methods, such as dart throwing and Lloyd relaxation, and have a range of applications. For numerical integration, our patterns often outperform state‐of‐the‐art stochastic and low‐discrepancy patterns, which have been specifically designed only for this purpose. For image reconstruction, our method outperforms traditional blue‐noise sampling when the variation in the signal is concentrated along one dimension. Finally, we use our patterns to distribute primitives uniformly in 3D space such that their 2D projections retain a blue‐noise distribution.
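The dart-throwing extension mentioned above can be sketched in 2D: accept a candidate only if it keeps a minimum distance in the full space and in each axis projection. This is a heavily simplified illustration of the idea; the radii and counts are arbitrary, and the paper's construction handles general projections and higher dimensions:

```python
import math
import random

def projective_darts(n, r_full=0.08, r_proj=0.02, max_tries=20000, seed=1):
    """Dart throwing in [0,1]^2 that enforces a Poisson-disk radius in 2D
    *and* a minimum spacing in each 1D axis projection, so the projected
    point sets also avoid clumping.
    """
    rng = random.Random(seed)
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        x, y = rng.random(), rng.random()
        ok = all(
            math.hypot(x - px, y - py) >= r_full   # 2D Poisson-disk test
            and abs(x - px) >= r_proj              # x-projection spacing
            and abs(y - py) >= r_proj              # y-projection spacing
            for px, py in pts
        )
        if ok:
            pts.append((x, y))
    return pts
```

Plain dart throwing satisfies only the first test; points that line up in x or y would then project onto near-duplicates, which is exactly what the extra two tests prevent.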
Item Real‐Time Rendering Techniques with Hardware Tessellation (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Nießner, M.; Keinert, B.; Fisher, M.; Stamminger, M.; Loop, C.; Schäfer, H.; Chen, Min and Zhang, Hao (Richard)
Graphics hardware has progressively been optimized to render more triangles with increasingly flexible shading. For highly detailed geometry, interactive applications restricted themselves to performing transforms on fixed geometry, since they could not incur the cost required to generate and transfer smooth or displaced geometry to the GPU at render time. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit, complex geometry can now be generated on the fly within the GPU's rendering pipeline. This has enabled the generation and displacement of smooth parametric surfaces in real‐time applications. However, many well‐established approaches in offline rendering are not directly transferable, due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this survey, we provide an overview of recent work and challenges in this topic by summarizing, discussing and comparing methods for the rendering of smooth and highly detailed surfaces in real time.
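As a CPU-side illustration of what the GPU tessellation unit does, here is uniform midpoint subdivision of a triangle, where each level splits one triangle into four. This is only an analogy: as the survey notes, hardware tessellation patterns are more flexible (and more constrained) than a simple recursive 1-to-4 split:

```python
def tessellate(tri, levels):
    """Uniform midpoint subdivision: recursively split a triangle
    (a tuple of three vertices) into 4**levels sub-triangles.
    """
    if levels == 0:
        return [tri]
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    out = []
    # one corner triangle per original vertex, plus the centre triangle
    for t in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(tessellate(t, levels - 1))
    return out
```

The generated vertices would then be displaced (e.g. by a height map) to add the fine detail, mirroring the displacement-mapping pipelines the survey compares.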
Item Robust Cardiac Function Assessment in 4D PC‐MRI Data of the Aorta and Pulmonary Artery (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard; Chen, Min and Zhang, Hao (Richard)
Four‐dimensional phase‐contrast magnetic resonance imaging (4D PC‐MRI) allows the non‐invasive acquisition of time‐resolved, 3D blood flow information. Stroke volumes (SVs) and regurgitation fractions (RFs) are two of the main measures used to assess cardiac function and the severity of valvular pathologies. Their quantification requires the flow rates in the forward and backward directions through a plane above the aortic or pulmonary valve. Unfortunately, the calculations are highly sensitive to the plane's angulation, since only orthogonally passing flow is considered. This often leads to physiologically implausible results. In this work, a robust quantification method is introduced to overcome this problem.
Collaborating radiologists and cardiologists were carefully observed while estimating SVs and RFs in various healthy‐volunteer and patient 4D PC‐MRI data sets with conventional quantification methods, that is, using a single plane above the valve that is freely movable along the centreline. By default, the plane is aligned perpendicular to the vessel's centreline, but free angulation (rotation) is possible. This facilitated the automation of their approach, which, in turn, allows us to derive statistical information about the sensitivity to plane angulation. Moreover, the experts expect a continuous decrease of the blood flow volume along the vessel course, a behaviour that conventional methods are often unable to produce. Thus, we present a procedure that fits a monotonic function to ensure such physiologically plausible results. In addition, this technique was adapted for use in branching vessels such as the pulmonary artery. The informal evaluation we performed shows the capability of our method to support diagnosis; a parameter evaluation confirms its robustness. Vortex flow was identified as one of the main causes of quantification uncertainties.
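The SV/RF quantification described in the item above integrates the velocity component orthogonal to the measuring plane over its area and over the cardiac cycle. A minimal sketch follows, under simplifying assumptions (uniform pixel grid, hypothetical units, per-frame velocity vectors) and with none of the paper's robustness machinery:

```python
def stroke_volume(velocities, normal, pixel_area, dt):
    """Forward and backward flow volumes through a measuring plane.

    `velocities` is a list of frames, each a list of 3D velocity
    vectors (one per plane pixel); only the component along `normal`
    contributes, which is why the result is angulation-sensitive.
    """
    forward = backward = 0.0
    for frame in velocities:
        for v in frame:
            q = (v[0] * normal[0] + v[1] * normal[1] + v[2] * normal[2]) * pixel_area * dt
            if q > 0:
                forward += q
            else:
                backward -= q
    return forward, backward
```

The regurgitation fraction is then the ratio of backward to forward volume, so any misestimated orthogonal component distorts both measures at once.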