33-Issue 1
Browsing 33-Issue 1 by issue date, showing items 1-20 of 24.
Item: Controlled Metamorphosis Between Skeleton-Driven Animated Polyhedral Meshes of Arbitrary Topologies (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Kravtsov, Denis; Fryazinov, Oleg; Adzhiev, Valery; Pasko, Alexander; Comninos, Peter. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Enabling animators to smoothly transform between animated meshes of differing topologies is a long-standing problem in geometric modelling and computer animation. In this paper, we propose a new hybrid approach built upon the advantages of scalar field-based models (often called implicit surfaces), which can easily change their topology by changing their defining scalar field. Given two meshes, animated by their rigging skeletons, we associate each mesh with its own approximating implicit surface. This implicit surface moves synchronously with the mesh. The shape-metamorphosis process is performed in several steps: first, we collapse the two meshes to their corresponding approximating implicit surfaces, then we transform between the two implicit surfaces, and finally we perform the inverse transition from the resulting metamorphosed implicit surface to the target mesh. The examples presented in this paper demonstrating the results of the proposed technique were implemented using an in-house plug-in for Maya™.

Item: Time Line Cell Tracking for the Approximation of Lagrangian Coherent Structures with Subgrid Accuracy (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Kuhn, A.; Engelke, W.; Rössl, C.; Hadwiger, M.; Theisel, H. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Lagrangian coherent structures (LCS) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCS is to compute height ridges in the finite-time Lyapunov exponent (FTLE) field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamics examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations.
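For context, the "standard way" the abstract refers to, computing the FTLE field from a gridded flow map, is compact enough to sketch. The snippet below is a minimal baseline FTLE computation in Python/NumPy, not the paper's subgrid-accurate time line tracking; the function name and the [ny, nx] grid layout are assumptions made for illustration.

```python
import numpy as np

def ftle_2d(flow_map_x, flow_map_y, dx, dy, T):
    """Baseline finite-time Lyapunov exponent field on a regular 2D grid.

    flow_map_x, flow_map_y: end positions of particles seeded on the grid and
    advected over the integration interval T (arrays of shape [ny, nx]).
    """
    # Spatial gradient of the flow map; np.gradient takes the axis-0 spacing
    # first, so dy precedes dx.
    dphix_dy, dphix_dx = np.gradient(flow_map_x, dy, dx)
    dphiy_dy, dphiy_dx = np.gradient(flow_map_y, dy, dx)

    ftle = np.zeros_like(flow_map_x)
    ny, nx = flow_map_x.shape
    for i in range(ny):
        for j in range(nx):
            F = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                          [dphiy_dx[i, j], dphiy_dy[i, j]]])
            C = F.T @ F                              # right Cauchy-Green tensor
            lam_max = np.linalg.eigvalsh(C)[-1]      # largest eigenvalue
            ftle[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return ftle
```

Height ridges of this field are then taken as LCS candidates; the paper's contribution is obtaining comparable structures from sparse trajectories without refining the particle sampling.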
Item: Occluder Simplification Using Planar Sections (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Silvennoinen, Ari; Saransaari, Hannu; Laine, Samuli; Lehtinen, Jaakko. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: We present a method for extreme occluder simplification. We take a triangle soup as input and produce a small set of polygons with closely matching occlusion properties. In contrast to methods that optimize the original geometry, our algorithm has very few requirements for the input; specifically, the input does not need to be a watertight, two-manifold mesh. This robustness is achieved by working on a well-behaved, discretized representation of the input instead of the original, potentially badly structured geometry. We first formulate the algorithm for individual occluders, and further introduce a hierarchy for handling large, complex scenes.

Item: Stackless Multi-BVH Traversal for CPU, MIC and GPU Ray Tracing (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Áfra, Attila T.; Szirmay-Kalos, László. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Stackless traversal algorithms for ray tracing acceleration structures require significantly less storage per ray than ordinary stack-based ones. This advantage is important for massively parallel rendering methods, where there are many rays in flight. On SIMD architectures, a commonly used acceleration structure is the multi bounding volume hierarchy (MBVH), which has multiple bounding boxes per node for improved parallelism. It scales to branching factors higher than two, for which, however, only stack-based traversal methods have been proposed so far. In this paper, we introduce a novel stackless traversal algorithm for MBVHs with up to four-way branching. Our approach replaces the stack with a small bitmask, supports dynamic ordered traversal, and has a low computation overhead. We also present efficient implementation techniques for recent CPU, MIC (Intel Xeon Phi) and GPU (NVIDIA Kepler) architectures.
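The "multiple bounding boxes per node" that make MBVHs attractive on SIMD hardware can be illustrated with a four-wide slab test. This is only a sketch of the node intersection step, written in NumPy for readability; the paper's actual contribution, replacing the traversal stack with a per-ray bitmask, is not reproduced here, and the function name and array layout are assumptions.

```python
import numpy as np

def intersect_4_aabbs(bmin, bmax, origin, inv_dir, t_min, t_max):
    """Slab test of one ray against the four child boxes of an MBVH node.

    bmin, bmax: (4, 3) arrays holding the four child AABBs, so all four boxes
    are tested at once, mirroring what SIMD lanes do on CPU, MIC or GPU.
    Degenerate ray directions (zero components) are not handled here.
    """
    t0 = (bmin - origin) * inv_dir              # per-axis entry/exit candidates
    t1 = (bmax - origin) * inv_dir
    t_near = np.minimum(t0, t1).max(axis=1)     # latest entry, per box
    t_far = np.maximum(t0, t1).min(axis=1)      # earliest exit, per box
    hit = (t_near <= t_far) & (t_far >= t_min) & (t_near <= t_max)
    return hit, t_near

# A traversal loop would visit the hit children in order of t_near; the paper
# records the not-yet-visited children in a small bitmask instead of a stack.
```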
Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd., 2014)
Holly Rushmeier and Oliver Deussen (Editors).

Item: Mobility-Trees for Indoor Scenes Manipulation (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Sharf, Andrei; Huang, Hui; Liang, Cheng; Zhang, Jiapei; Chen, Baoquan; Gong, Minglun. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: In this work, we introduce the 'mobility-tree' construct for high-level functional representation of complex 3D indoor scenes. In recent years, digital indoor scenes have become increasingly popular, consisting of detailed geometry and complex functionalities. These scenes often consist of objects that reoccur in various poses and interrelate with each other. In this work we analyse the reoccurrence of objects in the scene and automatically detect their functional mobilities. 'Mobility' analysis denotes the motion capabilities (i.e. degrees of freedom) of an object and its subparts, which typically relate to their indoor functionalities. We compute an object's mobility by analysing its spatial arrangement, repetitions and relations with other objects, and store it in a 'mobility-tree'. Repetitive motions in the scenes are grouped in 'mobility-groups', for which we develop a set of sophisticated controllers facilitating semantic high-level editing operations. We show applications of our mobility analysis to interactive scene manipulation and reorganization, and present results for a variety of indoor scenes.

Item: Robust Segmentation of Multiple Intersecting Manifolds from Unoriented Noisy Point Clouds (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Kustra, J.; Jalba, A.; Telea, A. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: We present a method for extracting complex manifolds with an arbitrary number of (self-)intersections from unoriented point clouds containing large amounts of noise. Manifolds are formed in a three-step process. First, small flat neighbourhoods of all possible orientations are created around all points. Next, neighbourhoods are assembled into larger quasi-flat patches, whose overlaps give the global connectivity structure of the point cloud. Finally, curved manifolds, as well as their intersection curves, are extracted from the patch connectivity graph via a multiple-source flood fill. The manifolds can be reconstructed into meshed surfaces using standard existing surface reconstruction methods. We demonstrate the speed and robustness of our method on several point clouds, with applications in point cloud segmentation, denoising and medial surface reconstruction.

Item: Photons: Evolution of a Course in Data Structures (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Duchowski, A. T. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: This paper presents the evolution of a data structures and algorithms course based on a specific computer graphics problem, namely photon mapping, as the teaching medium. The paper reports development of the course through several iterations and evaluations, dating back five years. The course originated as a problem-based graphics course requiring sophomore students to implement Hoppe et al.'s algorithm for surface reconstruction from unorganized points, found in their SIGGRAPH '92 paper of the same title. Although the solution to this problem lends itself well to an exploration of data structures and code modularization, both of which are traditionally taught in early computer science courses, the algorithm's complexity was reflected in students' overwhelmingly negative evaluations. Subsequently, because implementation of the kd-tree was seen as the linchpin data structure, it was again featured in the problem of ray tracing trees consisting of more than 250 000 000 triangles. Eventually, because the tree rendering was thought too specific a problem, the photon mapper was chosen as the semester-long problem considered to be a suitable replacement. This paper details the resultant course description and outline, from its now three semesters of teaching.
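The kd-tree the course is built around is typically exercised through the photon-gathering step of a photon mapper. The snippet below is a generic k-nearest-neighbour gather using SciPy's kd-tree, meant only to illustrate the kind of query students implement themselves; the function name, the disc-area density estimate and the random data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_irradiance(photon_positions, photon_powers, query_point, k=50):
    """Density-estimate irradiance at a shade point from the k nearest photons."""
    tree = cKDTree(photon_positions)            # build the kd-tree once
    dists, idx = tree.query(query_point, k=k)   # k nearest photons
    radius = dists.max()                        # radius of the gather disc
    return photon_powers[idx].sum(axis=0) / (np.pi * radius**2)

# Example with synthetic data: 10,000 photons with RGB power.
photons = np.random.rand(10000, 3)
powers = np.random.rand(10000, 3) * 1e-3
irradiance = estimate_irradiance(photons, powers, np.array([0.5, 0.5, 0.5]))
```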
Item: On Near Optimal Lattice Quantization of Multi-Dimensional Data Points (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Finckh, M.; Dammertz, H.; Lensch, H. P. A. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: One of the most elementary applications of a lattice is the quantization of real-valued s-dimensional vectors into finite bit precision to make them representable by a digital computer. Most often, the simple s-dimensional regular grid is used for this task, where each component of the vector is quantized individually. However, it is known that other lattices perform better regarding the average quantization error. A rank-1 lattice is a special type of lattice, where the lattice points can be described by a single s-dimensional generator vector. Further, the number of points inside the unit cube [0, 1)^s is arbitrary and can be directly enumerated by a single one-dimensional integer value. By choosing a suitable generator vector, the minimum distance between the lattice points can be maximized, which, as we show, leads to a nearly optimal mean quantization error. We present methods for finding parameters for s-dimensional maximized minimum distance rank-1 lattices and further show their practical use in computer graphics applications.
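A rank-1 lattice is simple to generate: the i-th point is the fractional part of i·g/n for a generator vector g. The sketch below builds such a lattice and quantizes points to it by brute-force nearest-neighbour search; the generator vector (1, 19) is an arbitrary illustration, not one of the maximized-minimum-distance generators the paper searches for, and the search ignores toroidal wrap-around for brevity.

```python
import numpy as np

def rank1_lattice(n, g):
    """Points of a rank-1 lattice in [0, 1)^s: x_i = frac(i * g / n)."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(g, dtype=float) / n) % 1.0

def quantize(points, lattice):
    """Index of the nearest lattice point for each input point (brute force,
    Euclidean distance without wrap-around; fine only for illustration)."""
    d = np.linalg.norm(points[:, None, :] - lattice[None, :, :], axis=2)
    return d.argmin(axis=1)

lat = rank1_lattice(64, (1, 19))              # 64 points in [0, 1)^2
idx = quantize(np.random.rand(5, 2), lat)     # each point -> one integer index
```

The returned index is exactly the single one-dimensional integer that enumerates the lattice point.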
Item: Multi-Scale Kernels Using Random Walks (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Sinha, A.; Ramani, K. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: We introduce novel multi-scale kernels using the random walk framework and derive corresponding embeddings and pairwise distances. The fractional moments of the rate of continuous-time random walk (equivalently, diffusion rate) are used to discover higher-order kernels (or similarities) between pairs of points. The formulated kernels are isometry, scale and tessellation invariant, can be made globally or locally shape aware, and are insensitive to partial objects and noise based on the moment and influence parameters. In addition, the corresponding kernel distances and embeddings are convergent and efficiently computable. We introduce dual Green's mean signatures based on the kernels and discuss the applicability of the multi-scale distance and embedding. Collectively, we present a unified view of popular embeddings and distance metrics while recovering intuitive probabilistic interpretations on discrete surface meshes.
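As a point of reference for the construction above, the plain diffusion (heat) kernel on a mesh graph already yields a family of multi-scale kernels; the paper generalizes this by using fractional moments of the random-walk rate. The sketch below is that simpler baseline, with a dense eigendecomposition that is only feasible for small meshes, and the function name is an assumption.

```python
import numpy as np

def diffusion_kernels(W, times):
    """Multi-scale diffusion kernels K_t = V exp(-t * Lambda) V^T on a graph.

    W: symmetric edge-weight matrix of a mesh graph (n x n ndarray).
    times: iterable of diffusion times, one kernel per scale.
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    lam, V = np.linalg.eigh(L)              # dense eigenpairs (small n only)
    return [V @ np.diag(np.exp(-t * lam)) @ V.T for t in times]
```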
Item: On Perception of Semi-Transparent Streamlines for Three-Dimensional Flow Visualization (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Mishchenko, O.; Crawfis, R. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: One of the standard techniques to visualize three-dimensional flow is to use geometry primitives. This solution, when opaque primitives are used, results in high levels of occlusion, especially with dense streamline seeding. Using semi-transparent geometry primitives can alleviate the problem of occlusion. However, with semi-transparency some parts of the data set become too vague and blurry, while others are still heavily occluded. We conducted a user study that provided us with results on perceptual limits of using semi-transparent geometry primitives for flow visualization. Texture models for semi-transparent streamlines were introduced. Test subjects were shown multiple overlaying layers of streamlines and recorded how many different flow directions they were able to perceive. The user study allowed us to identify a set of top-scoring textures. We discuss the results of the user study, provide guidelines on using semi-transparency for three-dimensional flow visualization and show how varying textures for different streamlines can further enhance the perception of dense streamlines. We also discuss strategies for dealing with very high levels of occlusion. The strategies are per-pixel filtering of flow directions, where only some of the streamlines are rendered at a particular pixel, and opacity normalization, a way of altering the opacity of overlapping streamlines with the same direction. We illustrate our results with a variety of visualizations.

Item: Appearance Stylization of Manhattan World Buildings (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Li, C.; Willis, P. J.; Brown, M. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: We propose a method that generates stylized building models from examples. Our method only requires minimal user input to capture the appearance of a Manhattan world (MW) building, and can automatically retarget the captured 'look and feel' to new models. The key contribution is a novel representation, namely the 'style sheet', that is captured independently from a building's structure. It summarizes characteristic shape and texture patterns on the building. In the retargeting stage, a style sheet is used to decorate new buildings of potentially different structures. Consistent face groups are proposed to capture complex texture patterns from the example model and to preserve the patterns in the retargeted models. We demonstrate how to learn such style sheets from different MW buildings and show the results of using them to generate novel models.

Item: Visibility Silhouettes for Semi-Analytic Spherical Integration (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Nowrouzezahrai, Derek; Baran, Ilya; Mitchell, Kenny; Jarosz, Wojciech. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: At each shade point, the spherical visibility function encodes occlusion from surrounding geometry, in all directions. Computing this function is difficult and point-sampling approaches, such as ray-tracing or hardware shadow mapping, are traditionally used to efficiently approximate it. We propose a semi-analytic solution to the problem where the spherical silhouette of the visibility is computed using a search over a 4D dual mesh of the scene. Once computed, we are able to semi-analytically integrate visibility-masked spherical functions along the visibility silhouette, instead of over the entire hemisphere. In this way, we avoid the artefacts that arise from using point-sampling strategies to integrate visibility, a function with unbounded frequency content. We demonstrate our approach on several applications, including direct illumination from realistic lighting and computation of pre-computed radiance transfer data. Additionally, we present a new frequency-space method for exactly computing all-frequency shadows on diffuse surfaces. Our results match ground truth computed using importance-sampled stratified Monte Carlo ray-tracing, with comparable performance on scenes with low-to-moderate geometric complexity.

Item: Boosting Techniques for Physics-Based Vortex Detection (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Zhang, L.; Deng, Q.; Machiraju, R.; Rangarajan, A.; Thompson, D.; Walters, D. K.; Shen, H.-W. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Robust automated vortex detection algorithms are needed to facilitate the exploration of large-scale turbulent fluid flow simulations. Unfortunately, robust non-local vortex detection algorithms are computationally intractable for large data sets, and local algorithms, while computationally tractable, lack robustness. We argue that the deficiencies inherent to the local definitions occur because of two fundamental issues: the lack of a rigorous definition of a vortex and the fact that a vortex is an intrinsically non-local phenomenon. As a first step towards addressing this problem, we demonstrate the use of machine learning techniques to enhance the robustness of local vortex detection algorithms. We motivate the presence of an expert-in-the-loop using empirical results based on machine learning techniques. We employ adaptive boosting to combine a suite of widely used, local vortex detection algorithms, which we term weak classifiers, into a robust compound classifier. Fundamentally, the training phase of the algorithm, in which an expert manually labels small, spatially contiguous regions of the data, incorporates non-local information into the resulting compound classifier. We demonstrate the efficacy of our approach by applying the compound classifier to two data sets obtained from computational fluid dynamical simulations. Our results demonstrate that the compound classifier has a reduced misclassification rate relative to the component classifiers.
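Adaptive boosting of local detectors can be prototyped with off-the-shelf tools. The sketch below feeds per-point responses of hypothetical local criteria (e.g. Q-criterion, lambda-2) into scikit-learn's AdaBoostClassifier on synthetic labels standing in for the expert-marked regions; it mimics the workflow rather than reproducing the paper's compound classifier.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# One row per grid point, one column per local detector response
# (e.g. Q-criterion, lambda_2, vorticity magnitude).  Both the features and
# the "expert" labels below are synthetic placeholders.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 3))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0.8).astype(int)

# AdaBoost's default weak learner is a depth-1 decision stump, i.e. a single
# thresholded response, close in spirit to using local detectors as weak
# classifiers.
clf = AdaBoostClassifier(n_estimators=50).fit(features, labels)
vortex_mask = clf.predict(features)        # per-point compound decision
```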
Item: Low-Cost Subpixel Rendering for Diverse Displays (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Engelhardt, Thomas; Schmidt, Thorsten-Walther; Kautz, Jan; Dachsbacher, Carsten. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Subpixel rendering increases the apparent display resolution by taking into account the subpixel structure of a given display. In essence, each subpixel is addressed individually, allowing the underlying signal to be sampled more densely. Unfortunately, naïve subpixel sampling introduces colour aliasing, as each subpixel only displays a specific colour (usually R, G and B subpixels are used). As previous work has shown, chromatic aliasing can be reduced significantly by taking the sensitivity of the human visual system into account. In this work, we find optimal filters for subpixel rendering for a diverse set of 1D and 2D subpixel layout patterns. We demonstrate that these optimal filters can be approximated well with analytical functions. We incorporate our filters into GPU-based multi-sample anti-aliasing to yield subpixel rendering at a very low cost (1–2 ms filtering time at HD resolution). We also show that texture filtering can be adapted to perform efficient subpixel rendering. Finally, we analyse the findings of a user study we performed, which underpins the increased visual fidelity that can be achieved for diverse display layouts by using our optimal filters.
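The basic mechanics of subpixel rendering for the common vertical RGB-stripe layout fit in a few lines: the signal is sampled once per subpixel and each colour channel takes a low-pass-filtered sample at its own subpixel position. The fixed 5-tap filter below is a conventional simple choice, not one of the perceptually optimized filters the paper derives, and the function name and greyscale input are assumptions.

```python
import numpy as np

def subpixel_rgb_stripe(signal_3x, filt=(1, 2, 3, 2, 1)):
    """Subpixel rendering for a vertical RGB-stripe display.

    signal_3x: greyscale image sampled at 3x horizontal resolution
    (shape [h, 3*w]), i.e. one sample per subpixel.
    """
    filt = np.asarray(filt, dtype=float)
    filt /= filt.sum()                               # normalize the filter
    padded = np.pad(signal_3x, ((0, 0), (2, 2)), mode='edge')
    filtered = np.stack([np.convolve(r, filt, mode='valid') for r in padded])
    # Each channel keeps the filtered sample at its own subpixel offset.
    return np.stack([filtered[:, 0::3],              # R subpixels
                     filtered[:, 1::3],              # G subpixels
                     filtered[:, 2::3]], axis=-1)    # B subpixels -> [h, w, 3]
```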
Item: Subdivision Surfaces with Creases and Truncated Multiple Knot Lines (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Kosinka, J.; Sabin, M. A.; Dodgson, N. A. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: We deal with subdivision schemes based on arbitrary degree B-splines. We focus on extraordinary knots which exhibit various levels of complexity in terms of both valency and multiplicity of knot lines emanating from such knots. The purpose of truncated multiple knot lines is to model creases which fair out. Our construction supports any degree and any knot line multiplicity and provides a modelling framework familiar to users accustomed to B-splines and NURBS systems.

Item: Modelling of Non-Periodic Aggregates Having a Pile Structure (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Sakurai, K.; Miyata, K. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: This paper presents a procedure for modelling aggregates such as piles that consist of arbitrary components. The method generates an aggregate of components that need to be accumulated, and an aggregate shape represents the surface of the target aggregate. The number of components and their positions and orientations are controlled by five parameters. The components, the aggregate shape and the parameters are the inputs for the method, which involves placement and refinement steps. In the placement step, the orientation and initial position of a component are determined by a non-periodic placement such that each component overlaps its neighbours. In the refinement step, to construct a pile structure, the position of each component is adjusted by reducing the overlap.

Item: Implicit Decals: Interactive Editing of Repetitive Patterns on Surfaces (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Groot, Erwin; Wyvill, Brian; Barthe, Loïc; Nasri, Ahmad; Lalonde, Paul. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Texture mapping is an essential component for creating 3D models and is widely used in both the game and the movie industries. Creating texture maps has always been a complex task and existing methods carefully balance flexibility with ease of use. One difficulty in using texturing is the repeated placement of individual textures over larger areas. In this paper, we propose a method which uses decals to place images onto a model. Our method allows the decals to compete for space and to deform as they are being pushed by other decals. A spherical field function is used to determine the position and the size of each decal and the deformation applied to fit the decals. The decals may span multiple objects with heterogeneous representations. Our method does not require an explicit parametrization of the model. As such, varieties of patterns, including repeated patterns like rocks, tiles and scales, can be mapped. We have implemented the method using the GPU, where placement, size and orientation of thousands of decals are manipulated in real time.
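The 'spherical field function' that lets decals compete for space can be illustrated with a compactly supported falloff. The snippet below uses a Wyvill-style (1 - d^2/r^2)^2 falloff, which is an assumption; the paper's exact field may differ, and the function name is illustrative.

```python
import numpy as np

def decal_field(points, center, radius):
    """Compactly supported spherical field around a decal centre:
    (1 - (d/r)^2)^2 inside the radius, zero outside."""
    d2 = np.sum((points - center) ** 2, axis=-1) / radius**2
    return np.where(d2 < 1.0, (1.0 - d2) ** 2, 0.0)

# At a surface point covered by several decals, comparing field values gives a
# simple rule for which decal dominates and how strongly each one is deformed.
```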
Item: Image Space Rendering of Point Clouds Using the HPR Operator (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Silva, R. Machado e; Esperança, C.; Marroquim, R.; Oliveira, A. A. F. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: The hidden point removal (HPR) operator introduced by Katz et al. [KTB07] provides an elegant solution for the problem of estimating the visibility of points in point samplings of surfaces. Since the method requires computing the three-dimensional convex hull of a set with the same cardinality as the original cloud, the method has been largely viewed as impractical for real-time rendering of medium to large clouds. In this paper, we examine how the HPR operator can be used more efficiently by combining several image space techniques, including an approximate convex hull algorithm, cloud sampling and GPU programming. Experiments show that this combination permits faster renderings without overly compromising the accuracy.
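For reference, the exact HPR operator of Katz et al. [KTB07] that this paper accelerates is short: points are spherically flipped about the viewpoint, and a point is marked visible if its flipped image lies on the convex hull of the flipped set plus the viewpoint. The sketch below follows that formulation; the function name and the radius heuristic are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible(points, viewpoint, radius_factor=100.0):
    """Boolean visibility mask for a 3D point cloud seen from `viewpoint`,
    using spherical flipping followed by a convex hull test."""
    p = points - viewpoint                          # work relative to the viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = radius_factor * norms.max()                 # flipping sphere radius
    flipped = p + 2.0 * (R - norms) * p / norms     # spherical flip of every point
    hull = ConvexHull(np.vstack([flipped, np.zeros((1, 3))]))
    visible = np.zeros(len(points), dtype=bool)
    visible[hull.vertices[hull.vertices < len(points)]] = True
    return visible
```

The cost is dominated by the 3D convex hull over the whole cloud, which is exactly what motivates the image space approximations studied in the paper.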
Item: A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering (The Eurographics Association and John Wiley and Sons Ltd., 2014)
Jönsson, Daniel; Sundén, Erik; Ynnerman, Anders; Ropinski, Timo. Editors: Holly Rushmeier and Oliver Deussen.
Abstract: Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years, several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification is conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we define future challenges in the area of interactive advanced volumetric illumination.