33-Issue 1
Browsing 33-Issue 1 by Title, showing items 1-20 of 24.
Item: Appearance Stylization of Manhattan World Buildings (The Eurographics Association and John Wiley and Sons Ltd., 2014). Li, C.; Willis, P. J.; Brown, M.; Holly Rushmeier and Oliver Deussen.
We propose a method that generates stylized building models from examples. Our method only requires minimal user input to capture the appearance of a Manhattan world (MW) building, and can automatically retarget the captured 'look and feel' to new models. The key contribution is a novel representation, namely the 'style sheet', that is captured independently from a building's structure. It summarizes characteristic shape and texture patterns on the building. In the retargeting stage, a style sheet is used to decorate new buildings of potentially different structures. Consistent face groups are proposed to capture complex texture patterns from the example model and to preserve the patterns in the retargeted models. We will demonstrate how to learn such style sheets from different MW buildings and the results of using them to generate novel models.

Item: Boosting Techniques for Physics-Based Vortex Detection (The Eurographics Association and John Wiley and Sons Ltd., 2014). Zhang, L.; Deng, Q.; Machiraju, R.; Rangarajan, A.; Thompson, D.; Walters, D. K.; Shen, H.-W.; Holly Rushmeier and Oliver Deussen.
Robust automated vortex detection algorithms are needed to facilitate the exploration of large-scale turbulent fluid flow simulations. Unfortunately, robust non-local vortex detection algorithms are computationally intractable for large data sets, and local algorithms, while computationally tractable, lack robustness. We argue that the deficiencies inherent to the local definitions occur because of two fundamental issues: the lack of a rigorous definition of a vortex and the fact that a vortex is an intrinsically non-local phenomenon. As a first step towards addressing this problem, we demonstrate the use of machine learning techniques to enhance the robustness of local vortex detection algorithms. We motivate the presence of an expert-in-the-loop using empirical results based on machine learning techniques. We employ adaptive boosting to combine a suite of widely used local vortex detection algorithms, which we term weak classifiers, into a robust compound classifier. Fundamentally, the training phase of the algorithm, in which an expert manually labels small, spatially contiguous regions of the data, incorporates non-local information into the resulting compound classifier. We demonstrate the efficacy of our approach by applying the compound classifier to two data sets obtained from computational fluid dynamics simulations. Our results demonstrate that the compound classifier has a reduced misclassification rate relative to the component classifiers.
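The boosting step described above is standard enough to sketch. The following is a minimal illustration, not the authors' code: it assumes the local detectors (e.g. lambda2 or Q-criterion thresholds) have already been evaluated on every grid cell as +/-1 votes and that an expert has labelled a training region; discrete AdaBoost then weights the detectors into a compound classifier. All names and array layouts here are hypothetical.

```python
import numpy as np

def adaboost_combine(weak_preds, labels, n_rounds=None):
    """Combine weak vortex detectors into a compound classifier (discrete AdaBoost).

    weak_preds: (n_detectors, n_cells) array of +/-1 votes from local criteria
                (e.g. lambda2, Q-criterion, swirling strength), one row per detector.
    labels:     (n_cells,) array of +/-1 expert labels on the training region.
    Returns the per-detector weights alpha.
    """
    n_det, n_cells = weak_preds.shape
    w = np.full(n_cells, 1.0 / n_cells)          # sample weights
    alphas = np.zeros(n_det)
    for _ in range(n_rounds or n_det):
        errs = [(w * (p != labels)).sum() for p in weak_preds]
        j = int(np.argmin(errs))                  # pick the currently best detector
        err = min(max(errs[j], 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        alphas[j] += alpha
        w *= np.exp(-alpha * labels * weak_preds[j])
        w /= w.sum()                              # re-normalise sample weights
    return alphas

def compound_predict(weak_preds, alphas):
    """Weighted vote of the weak detectors: +1 = vortex, -1 = background."""
    return np.sign(alphas @ weak_preds)
```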
Item: Controlled Metamorphosis Between Skeleton-Driven Animated Polyhedral Meshes of Arbitrary Topologies (The Eurographics Association and John Wiley and Sons Ltd., 2014). Kravtsov, Denis; Fryazinov, Oleg; Adzhiev, Valery; Pasko, Alexander; Comninos, Peter; Holly Rushmeier and Oliver Deussen.
Enabling animators to smoothly transform between animated meshes of differing topologies is a long-standing problem in geometric modelling and computer animation. In this paper, we propose a new hybrid approach built upon the advantages of scalar field-based models (often called implicit surfaces), which can easily change their topology by changing their defining scalar field. Given two meshes, animated by their rigging skeletons, we associate each mesh with its own approximating implicit surface. This implicit surface moves synchronously with the mesh. The shape-metamorphosis process is performed in several steps: first, we collapse the two meshes to their corresponding approximating implicit surfaces, then we transform between the two implicit surfaces, and finally we transition from the resulting metamorphosed implicit surface back to the target mesh. The examples presented in this paper demonstrating the results of the proposed technique were implemented using an in-house plug-in for Maya™.

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd., 2014). Holly Rushmeier and Oliver Deussen.

Item: Image Space Rendering of Point Clouds Using the HPR Operator (The Eurographics Association and John Wiley and Sons Ltd., 2014). Silva, R. Machado e; Esperança, C.; Marroquim, R.; Oliveira, A. A. F.; Holly Rushmeier and Oliver Deussen.
The hidden point removal (HPR) operator introduced by Katz et al. [KTB07] provides an elegant solution for the problem of estimating the visibility of points in point samplings of surfaces. Since the method requires computing the three-dimensional convex hull of a set with the same cardinality as the original cloud, it has been largely viewed as impractical for real-time rendering of medium to large clouds. In this paper we examine how the HPR operator can be used more efficiently by combining several image space techniques, including an approximate convex hull algorithm, cloud sampling and GPU programming. Experiments show that this combination permits faster renderings without overly compromising accuracy.
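For readers unfamiliar with the HPR operator referenced above, here is a minimal sketch of the original exact formulation of Katz et al. (spherical flipping followed by a convex hull), not the paper's approximate image-space variant. scipy's ConvexHull merely stands in for whatever hull routine is available, and the gamma parameter is an illustrative knob.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible_points(points, viewpoint, gamma=1.0):
    """Hidden point removal via spherical flipping plus an exact convex hull.

    points:    (n, 3) array of cloud positions.
    viewpoint: (3,) camera position.
    gamma:     factor on the flipping radius (>= 1); larger values keep more points.
    Returns indices of points estimated to be visible from the viewpoint.
    """
    p = points - viewpoint                                   # move viewpoint to the origin
    norms = np.maximum(np.linalg.norm(p, axis=1, keepdims=True), 1e-12)
    radius = gamma * norms.max()                             # flipping sphere encloses the cloud
    flipped = p + 2.0 * (radius - norms) * (p / norms)       # spherical flipping
    # Visibility = membership of the convex hull of the flipped cloud plus the viewpoint.
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
    visible = set(hull.vertices)
    visible.discard(len(points))                             # drop the viewpoint itself
    return np.array(sorted(visible))
```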
Item: Implicit Decals: Interactive Editing of Repetitive Patterns on Surfaces (The Eurographics Association and John Wiley and Sons Ltd., 2014). Groot, Erwin; Wyvill, Brian; Barthe, Loïc; Nasri, Ahmad; Lalonde, Paul; Holly Rushmeier and Oliver Deussen.
Texture mapping is an essential component for creating 3D models and is widely used in both the game and the movie industries. Creating texture maps has always been a complex task and existing methods carefully balance flexibility with ease of use. One difficulty in texturing is the repeated placement of individual textures over larger areas. In this paper, we propose a method which uses decals to place images onto a model. Our method allows the decals to compete for space and to deform as they are being pushed by other decals. A spherical field function is used to determine the position and the size of each decal and the deformation applied to fit the decals. The decals may span multiple objects with heterogeneous representations. Our method does not require an explicit parametrization of the model. As such, varieties of patterns, including repeated patterns like rocks, tiles and scales, can be mapped. We have implemented the method on the GPU, where the placement, size and orientation of thousands of decals are manipulated in real time.

Item: Interactive Simulation of Rigid Body Dynamics in Computer Graphics (The Eurographics Association and John Wiley and Sons Ltd., 2014). Bender, Jan; Erleben, Kenny; Trinkle, Jeff; Holly Rushmeier and Oliver Deussen.
Interactive rigid body simulation is an important part of many modern computer tools, which no authoring tool nor game engine can do without. Such high-performance computer tools open up new possibilities for changing how designers, engineers, modelers and animators work with their design problems. This paper is a self-contained state-of-the-art report on the physics, the models, the numerical methods and the algorithms used in interactive rigid body simulation, all of which have evolved and matured over the past 20 years. Furthermore, the paper communicates the mathematical and theoretical details in a pedagogical manner. This paper is not only a stake in the sand on what has been done; it also seeks to give the reader deeper insights to help guide their future research.
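As a point of reference for the survey above, the sketch below shows the smallest building block such simulators share: one semi-implicit (symplectic) Euler step for a single unconstrained rigid body. It deliberately ignores contacts, joints and constraint solvers, which are the actual subject of the report, and all function and variable names are the sketch's own.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def step_rigid_body(state, force, torque, mass, inertia_body, dt):
    """One semi-implicit Euler step for a single unconstrained rigid body.

    state = (x, q, v, w): position, orientation quaternion (w, x, y, z),
    linear velocity and angular velocity, all in world space.
    inertia_body: (3, 3) body-space inertia tensor.
    """
    x, q, v, w = state
    # Rotate the body-space inertia into world space: I_world = R I_body R^T.
    qw, qx, qy, qz = q
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    I_world = R @ inertia_body @ R.T
    # Update velocities first (semi-implicit), then integrate positions with them.
    v = v + dt * force / mass
    w = w + dt * np.linalg.solve(I_world, torque - np.cross(w, I_world @ w))
    x = x + dt * v
    q = q + 0.5 * dt * quat_mul(np.array([0.0, *w]), q)
    q = q / np.linalg.norm(q)                  # re-normalise the orientation
    return x, q, v, w
```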
Item: Low-Cost Subpixel Rendering for Diverse Displays (The Eurographics Association and John Wiley and Sons Ltd., 2014). Engelhardt, Thomas; Schmidt, Thorsten-Walther; Kautz, Jan; Dachsbacher, Carsten; Holly Rushmeier and Oliver Deussen.
Subpixel rendering increases the apparent display resolution by taking into account the subpixel structure of a given display. In essence, each subpixel is addressed individually, allowing the underlying signal to be sampled more densely. Unfortunately, naïve subpixel sampling introduces colour aliasing, as each subpixel only displays a specific colour (usually R, G and B subpixels are used). As previous work has shown, chromatic aliasing can be reduced significantly by taking the sensitivity of the human visual system into account. In this work, we find optimal filters for subpixel rendering for a diverse set of 1D and 2D subpixel layout patterns. We demonstrate that these optimal filters can be approximated well with analytical functions. We incorporate our filters into GPU-based multi-sample anti-aliasing to yield subpixel rendering at a very low cost (1-2 ms filtering time at HD resolution). We also show that texture filtering can be adapted to perform efficient subpixel rendering. Finally, we analyse the findings of a user study we performed, which underpins the increased visual fidelity that can be achieved for diverse display layouts by using our optimal filters.
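To make the idea of subpixel filtering concrete, the sketch below applies the classic five-tap (1, 2, 3, 2, 1)/9 low-pass filter often used in RGB-stripe subpixel text rendering to a signal sampled at three times the horizontal pixel resolution. This is not one of the paper's display-specific optimal filters; it only illustrates how per-subpixel filtering trades colour fringing against sharpness.

```python
import numpy as np

# Classic five-tap low-pass weights for RGB-stripe layouts; the paper instead
# derives optimal, display-specific filters (also for 2D layouts).
FIVE_TAP = np.array([1, 2, 3, 2, 1], dtype=float) / 9.0

def subpixel_render_row(hires_row):
    """Map a grayscale row sampled at 3x horizontal resolution onto RGB subpixels.

    hires_row: 1D array whose length is 3 * output_width, values in [0, 1].
    Returns an (output_width, 3) array of per-pixel RGB intensities.
    """
    padded = np.pad(hires_row, 2, mode="edge")
    # Each output subpixel takes a weighted average of the five nearest
    # high-resolution samples centred on its own position.
    filtered = np.convolve(padded, FIVE_TAP, mode="valid")   # length 3 * width
    return filtered.reshape(-1, 3)     # consecutive samples feed R, G, B
```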
Item: Mobility-Trees for Indoor Scenes Manipulation (The Eurographics Association and John Wiley and Sons Ltd., 2014). Sharf, Andrei; Huang, Hui; Liang, Cheng; Zhang, Jiapei; Chen, Baoquan; Gong, Minglun; Holly Rushmeier and Oliver Deussen.
In this work, we introduce the 'mobility-tree' construct for high-level functional representation of complex 3D indoor scenes. In recent years, digital indoor scenes are becoming increasingly popular, consisting of detailed geometry and complex functionalities. These scenes often consist of objects that reoccur in various poses and interrelate with each other. In this work we analyse the reoccurrence of objects in the scene and automatically detect their functional mobilities. 'Mobility' analysis denotes the motion capabilities (i.e. degrees of freedom) of an object and its subparts, which typically relate to their indoor functionalities. We compute an object's mobility by analysing its spatial arrangement, repetitions and relations with other objects, and store it in a 'mobility-tree'. Repetitive motions in the scenes are grouped in 'mobility-groups', for which we develop a set of sophisticated controllers facilitating semantical high-level editing operations. We show applications of our mobility analysis to interactive scene manipulation and reorganization, and present results for a variety of indoor scenes.

Item: Modelling of Non-Periodic Aggregates Having a Pile Structure (The Eurographics Association and John Wiley and Sons Ltd., 2014). Sakurai, K.; Miyata, K.; Holly Rushmeier and Oliver Deussen.
This paper presents a procedure for modelling aggregates such as piles that consist of arbitrary components. The method generates an aggregate of components that need to be accumulated, and an aggregate shape represents the surface of the target aggregate. The number of components and their positions and orientations are controlled by five parameters. The components, the aggregate shape and the parameters are the inputs for the method, which involves placement and refinement steps. In the placement step, the orientation and initial position of a component are determined by a non-periodic placement such that each component overlaps its neighbours. In the refinement step, to construct a pile structure, the position of each component is adjusted by reducing the overlap.

Item: Multi-Scale Kernels Using Random Walks (The Eurographics Association and John Wiley and Sons Ltd., 2014). Sinha, A.; Ramani, K.; Holly Rushmeier and Oliver Deussen.
We introduce novel multi-scale kernels using the random walk framework and derive corresponding embeddings and pairwise distances. The fractional moments of the rate of continuous-time random walk (equivalently, diffusion rate) are used to discover higher-order kernels (or similarities) between pairs of points. The formulated kernels are isometry, scale and tessellation invariant, can be made globally or locally shape aware, and are insensitive to partial objects and noise based on the moment and influence parameters. In addition, the corresponding kernel distances and embeddings are convergent and efficiently computable. We introduce dual Green's mean signatures based on the kernels and discuss the applicability of the multi-scale distance and embedding. Collectively, we present a unified view of popular embeddings and distance metrics while recovering intuitive probabilistic interpretations on discrete surface meshes.
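The paper's kernels are built from fractional moments of a continuous-time random walk; as a simpler, closely related illustration of how a random-walk or diffusion process yields multi-scale similarities, the sketch below computes plain diffusion (heat) kernels and their induced distances from the eigendecomposition of a mesh graph Laplacian. The combinatorial Laplacian and the distance formula are standard constructions, not taken from the paper.

```python
import numpy as np

def diffusion_kernels(W, times):
    """Multi-scale diffusion kernels on a mesh/graph from its Laplacian spectrum.

    W:     (n, n) symmetric edge-weight (adjacency) matrix of the mesh graph.
    times: iterable of diffusion times t; larger t = coarser scale.
    Returns a list of (n, n) kernels K_t with
    K_t[i, j] = sum_k exp(-t * lam_k) * phi_k[i] * phi_k[j].
    """
    L = np.diag(W.sum(axis=1)) - W               # combinatorial graph Laplacian
    lam, phi = np.linalg.eigh(L)                 # eigenpairs; lam[0] is ~0
    return [(phi * np.exp(-t * lam)) @ phi.T for t in times]

def kernel_distance(K):
    """Pairwise distance induced by a kernel: d(i,j)^2 = K(i,i) + K(j,j) - 2 K(i,j)."""
    diag = np.diag(K)
    return np.sqrt(np.maximum(diag[:, None] + diag[None, :] - 2.0 * K, 0.0))
```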
Item: Occluder Simplification Using Planar Sections (The Eurographics Association and John Wiley and Sons Ltd., 2014). Silvennoinen, Ari; Saransaari, Hannu; Laine, Samuli; Lehtinen, Jaakko; Holly Rushmeier and Oliver Deussen.
We present a method for extreme occluder simplification. We take a triangle soup as input, and produce a small set of polygons with closely matching occlusion properties. In contrast to methods that optimize the original geometry, our algorithm has very few requirements for the input; specifically, the input does not need to be a watertight, two-manifold mesh. This robustness is achieved by working on a well-behaved, discretized representation of the input instead of the original, potentially badly structured geometry. We first formulate the algorithm for individual occluders, and further introduce a hierarchy for handling large, complex scenes.

Item: On Near Optimal Lattice Quantization of Multi-Dimensional Data Points (The Eurographics Association and John Wiley and Sons Ltd., 2014). Finckh, M.; Dammertz, H.; Lensch, H. P. A.; Holly Rushmeier and Oliver Deussen.
One of the most elementary applications of a lattice is the quantization of real-valued s-dimensional vectors into finite bit precision to make them representable by a digital computer. Most often, the simple s-dimensional regular grid is used for this task, where each component of the vector is quantized individually. However, it is known that other lattices perform better regarding the average quantization error. A rank-1 lattice is a special type of lattice where the lattice points can be described by a single s-dimensional generator vector. Further, the number of points inside the unit cube [0, 1)^s is arbitrary and can be directly enumerated by a single one-dimensional integer value. By choosing a suitable generator vector, the minimum distance between the lattice points can be maximized, which, as we show, leads to a nearly optimal mean quantization error. We present methods for finding parameters for s-dimensional maximized minimum distance rank-1 lattices and further show their practical use in computer graphics applications.
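A rank-1 lattice is easy to write down, which the sketch below does: point i is frac(i*g/n) for an integer generator vector g, and quantization maps a sample to its nearest lattice point. The brute-force toroidal nearest-neighbour search and the example generator (1, 19) are purely illustrative; the paper is concerned with finding generator vectors that maximize the minimum point distance and with efficient quantization.

```python
import numpy as np

def rank1_lattice(n, g):
    """Points of a rank-1 lattice in the unit cube [0, 1)^s.

    n: number of lattice points; g: integer generator vector of length s.
    Point i is frac(i * g / n), so the whole lattice is indexed by one integer.
    """
    i = np.arange(n)[:, None]
    return (i * np.asarray(g)[None, :] / n) % 1.0

def quantize(points, lattice):
    """Map each s-dimensional point to the index of its nearest lattice point.

    Brute-force nearest neighbour on the torus (wrap-around distance), purely
    for illustration; a practical quantizer would exploit the lattice structure.
    """
    diff = np.abs(points[:, None, :] - lattice[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)            # toroidal distance per axis
    return np.argmin((diff ** 2).sum(axis=-1), axis=1)

# Example: a 2D lattice with 64 points and a (hypothetical) generator (1, 19).
lat = rank1_lattice(64, [1, 19])
idx = quantize(np.random.rand(5, 2), lat)
```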
Item: On Perception of Semi-Transparent Streamlines for Three-Dimensional Flow Visualization (The Eurographics Association and John Wiley and Sons Ltd., 2014). Mishchenko, O.; Crawfis, R.; Holly Rushmeier and Oliver Deussen.
One of the standard techniques to visualize three-dimensional flow is to use geometry primitives. This solution, when opaque primitives are used, results in high levels of occlusion, especially with dense streamline seeding. Using semi-transparent geometry primitives can alleviate the problem of occlusion. However, with semi-transparency some parts of the data set become too vague and blurry, while others are still heavily occluded. We conducted a user study that provided us with results on the perceptual limits of using semi-transparent geometry primitives for flow visualization. Texture models for semi-transparent streamlines were introduced. Test subjects were shown multiple overlapping layers of streamlines and recorded how many different flow directions they were able to perceive. The user study allowed us to identify a set of top-scoring textures. We discuss the results of the user study, provide guidelines on using semi-transparency for three-dimensional flow visualization, and show how varying textures for different streamlines can further enhance the perception of dense streamlines. We also discuss strategies for dealing with very high levels of occlusion: per-pixel filtering of flow directions, where only some of the streamlines are rendered at a particular pixel, and opacity normalization, a way of altering the opacity of overlapping streamlines with the same direction. We illustrate our results with a variety of visualizations.
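The 'opacity normalization' mentioned above can be illustrated with ordinary alpha compositing: if k streamlines with the same direction overlap at a pixel, their per-layer opacity can be reduced so the stack still composites to a chosen target opacity. The sketch below shows one plausible way to realise that; the paper's exact operator may differ.

```python
def normalized_alpha(target_alpha, k):
    """Per-layer opacity so that k identical overlapping layers reach target_alpha.

    Solves 1 - (1 - a)^k = target_alpha for a; with k = 1 this is just target_alpha.
    """
    return 1.0 - (1.0 - target_alpha) ** (1.0 / k)

def composite_over(colors_back_to_front, alphas, background=(0.0, 0.0, 0.0)):
    """Standard back-to-front 'over' compositing of semi-transparent fragments."""
    r, g, b = background
    for (cr, cg, cb), a in zip(colors_back_to_front, alphas):
        r = a * cr + (1.0 - a) * r
        g = a * cg + (1.0 - a) * g
        b = a * cb + (1.0 - a) * b
    return (r, g, b)
```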
Item: Photons: Evolution of a Course in Data Structures (The Eurographics Association and John Wiley and Sons Ltd., 2014). Duchowski, A. T.; Holly Rushmeier and Oliver Deussen.
This paper presents the evolution of a data structures and algorithms course based on a specific computer graphics problem, namely photon mapping, as the teaching medium. The paper reports the development of the course through several iterations and evaluations, dating back five years. The course originated as a problem-based graphics course requiring sophomore students to implement Hoppe et al.'s algorithm for surface reconstruction from unorganized points, found in their SIGGRAPH '92 paper of the same title. Although the solution to this problem lends itself well to an exploration of data structures and code modularization, both of which are traditionally taught in early computer science courses, the algorithm's complexity was reflected in students' overwhelmingly negative evaluations. Subsequently, because implementation of the kd-tree was seen as the linchpin data structure, it was again featured in the problem of ray tracing trees consisting of more than 250 000 000 triangles. Eventually, because the tree rendering was thought too specific a problem, the photon mapper was chosen as the semester-long problem considered to be a suitable replacement. This paper details the resultant course description and outline, from its now three semesters of teaching.
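Since the kd-tree is singled out above as the linchpin data structure, here is the kind of minimal kd-tree build and k-nearest-photon query students end up writing for the photon-map radiance estimate. It is a generic textbook sketch, not material from the course.

```python
import heapq
import numpy as np

def build_kdtree(photons, depth=0):
    """Recursively build a kd-tree over photon positions ((n, 3) array of points).

    Splits on the median along axes cycled by depth; returns nested dicts.
    """
    if len(photons) == 0:
        return None
    axis = depth % photons.shape[1]
    order = np.argsort(photons[:, axis])
    mid = len(photons) // 2
    return {
        "point": photons[order[mid]],
        "axis": axis,
        "left": build_kdtree(photons[order[:mid]], depth + 1),
        "right": build_kdtree(photons[order[mid + 1:]], depth + 1),
    }

def k_nearest(node, query, k, heap=None):
    """Collect the k photons nearest to 'query' (max-heap keyed on -distance^2)."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    d2 = float(((node["point"] - query) ** 2).sum())
    if len(heap) < k:
        heapq.heappush(heap, (-d2, node["point"].tolist()))
    elif d2 < -heap[0][0]:
        heapq.heapreplace(heap, (-d2, node["point"].tolist()))
    axis, delta = node["axis"], query[node["axis"]] - node["point"][node["axis"]]
    near, far = ("left", "right") if delta < 0 else ("right", "left")
    k_nearest(node[near], query, k, heap)
    # Only descend into the far side if the splitting plane is closer than the
    # current k-th nearest photon (or the heap is not yet full).
    if len(heap) < k or delta * delta < -heap[0][0]:
        k_nearest(node[far], query, k, heap)
    return heap
```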
Item: Projection Mapping on Arbitrary Cubic Cell Complexes (The Eurographics Association and John Wiley and Sons Ltd., 2014). Apaza-Agüero, K.; Silva, L.; Bellon, O. R. P.; Holly Rushmeier and Oliver Deussen.
This work presents a new representation used as a rendering primitive for surfaces. Our representation is defined by an arbitrary cubic cell complex: a projection-based parameterization domain for surfaces where geometry and appearance information are stored as tile textures. This representation is used by our ray casting rendering algorithm, called projection mapping, which can be used for rendering geometry and appearance details of surfaces from arbitrary viewpoints. The projection mapping algorithm uses a fragment shader based on the linear and binary searches of the relief mapping algorithm. Instead of traditionally rendering the surface, only the front faces of our rendering primitive (the arbitrary cubic cell complex) are drawn, and geometry and appearance details of the surface are rendered back by using projection mapping. Alternatively, another method is proposed for mapping appearance information on complex surfaces using our arbitrary cubic cell complexes. In this case, instead of reconstructing the geometry as in projection mapping, the original mesh of a surface is passed directly to the rendering algorithm. This algorithm is applied in the texture mapping of cultural heritage sculptures.

Item: Robust Segmentation of Multiple Intersecting Manifolds from Unoriented Noisy Point Clouds (The Eurographics Association and John Wiley and Sons Ltd., 2014). Kustra, J.; Jalba, A.; Telea, A.; Holly Rushmeier and Oliver Deussen.
We present a method for extracting complex manifolds with an arbitrary number of (self-)intersections from unoriented point clouds containing large amounts of noise. Manifolds are formed in a three-step process. First, small flat neighbourhoods of all possible orientations are created around all points. Next, neighbourhoods are assembled into larger quasi-flat patches, whose overlaps give the global connectivity structure of the point cloud. Finally, curved manifolds are extracted from the patch connectivity graph via a multiple-source flood fill. The manifolds can be reconstructed into meshed surfaces using standard existing surface reconstruction methods. We demonstrate the speed and robustness of our method on several point clouds, with applications in point cloud segmentation, denoising and medial surface reconstruction.
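The final step of the segmentation pipeline above, the multiple-source flood fill over the patch connectivity graph, can be sketched as a plain multi-source breadth-first search. The graph representation below is hypothetical, and the sketch omits the paper's actual logic for deciding, at intersection curves, which neighbouring patches belong to the same manifold.

```python
from collections import deque

def multi_source_flood_fill(adjacency, seeds):
    """Label patches by multi-source flood fill over the patch connectivity graph.

    adjacency: dict patch_id -> iterable of neighbouring patch ids.
    seeds:     dict patch_id -> manifold label for the starting patches.
    Unreached patches keep label None.
    """
    labels = {p: None for p in adjacency}
    queue = deque()
    for p, lab in seeds.items():
        labels[p] = lab
        queue.append(p)
    while queue:
        p = queue.popleft()
        for q in adjacency[p]:
            if labels[q] is None:          # first front to arrive claims the patch
                labels[q] = labels[p]
                queue.append(q)
    return labels
```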
Item: Scalable Realistic Rendering with Many-Light Methods (The Eurographics Association and John Wiley and Sons Ltd., 2014). Dachsbacher, Carsten; Křivánek, Jaroslav; Hašan, Miloš; Arbree, Adam; Walter, Bruce; Novák, Jan; Holly Rushmeier and Oliver Deussen.
Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem, reducing the full light transport simulation to the calculation of the direct illumination from many virtual light sources. These methods are unrivaled in their scalability: they are able to produce plausible images in a fraction of a second but also converge to the full solution over time. In this state-of-the-art report, we give an easy-to-follow, introductory tutorial of the many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; discuss limitations regarding materials and light transport phenomena; and present a vision to motivate and guide future research. We will cover both the fundamental concepts as well as improvements, extensions and applications of many-light rendering.
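The core of any many-light method is the gather over virtual point lights, which is simple to sketch: sum, over all VPLs, the visibility-weighted and clamped geometry term times the BRDF times the VPL's flux. The diffuse-only version below, including the clamping threshold, is a generic illustration rather than any specific algorithm from the report.

```python
import numpy as np

def gather_vpl(x, n_x, albedo, vpls, visible, g_clamp=10.0):
    """Diffuse shading of point x by summing contributions of virtual point lights.

    x, n_x:  shading position and unit normal.
    albedo:  diffuse reflectance (3,) of the surface at x.
    vpls:    list of (position, unit normal, flux) triples produced by tracing
             paths from the real light sources.
    visible: callable (x, y) -> bool, e.g. a shadow ray through the scene.
    g_clamp: bound on the geometry term to avoid the weak singularity when a
             VPL lies very close to x (the usual source of spiky artifacts).
    """
    brdf = np.asarray(albedo) / np.pi                     # Lambertian BRDF
    radiance = np.zeros(3)
    for y, n_y, flux in vpls:
        d = y - x
        dist2 = float(d @ d)
        wi = d / np.sqrt(dist2)
        g = max(n_x @ wi, 0.0) * max(-(n_y @ wi), 0.0) / dist2
        g = min(g, g_clamp)                               # clamped geometry term
        if g > 0.0 and visible(x, y):
            radiance += brdf * np.asarray(flux) * g
    return radiance
```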
Item: Stackless Multi-BVH Traversal for CPU, MIC and GPU Ray Tracing (The Eurographics Association and John Wiley and Sons Ltd., 2014). Áfra, Attila T.; Szirmay-Kalos, László; Holly Rushmeier and Oliver Deussen.
Stackless traversal algorithms for ray tracing acceleration structures require significantly less storage per ray than ordinary stack-based ones. This advantage is important for massively parallel rendering methods, where there are many rays in flight. On SIMD architectures, a commonly used acceleration structure is the multi-bounding volume hierarchy (MBVH), which has multiple bounding boxes per node for improved parallelism. It scales to branching factors higher than two, for which, however, only stack-based traversal methods have been proposed so far. In this paper, we introduce a novel stackless traversal algorithm for MBVHs with up to four-way branching. Our approach replaces the stack with a small bitmask, supports dynamic ordered traversal, and has a low computation overhead. We also present efficient implementation techniques for recent CPU, MIC (Intel Xeon Phi) and GPU (NVIDIA Kepler) architectures.

Item: Subdivision Surfaces with Creases and Truncated Multiple Knot Lines (The Eurographics Association and John Wiley and Sons Ltd., 2014). Kosinka, J.; Sabin, M. A.; Dodgson, N. A.; Holly Rushmeier and Oliver Deussen.
We deal with subdivision schemes based on arbitrary degree B-splines. We focus on extraordinary knots which exhibit various levels of complexity in terms of both the valency and the multiplicity of knot lines emanating from such knots. The purpose of truncated multiple knot lines is to model creases which fair out. Our construction supports any degree and any knot line multiplicity, and provides a modelling framework familiar to users accustomed to B-splines and NURBS systems.