32-Issue 8
Browsing 32-Issue 8 by Title
Now showing 1 - 20 of 24
Item An Algorithm for Random Fractal Filling of Space (The Eurographics Association and Blackwell Publishing Ltd., 2013) Shier, John; Bourke, Paul; Holly Rushmeier and Oliver Deussen
Computational experiments with a simple algorithm show that it is possible to fill any spatial region with a random fractalization of any shape, with a continuous range of pre‐specified fractal dimensions D. The algorithm is presented here in 1, 2 or 3 physical dimensions. The size power‐law exponent c or the fractal dimension D can be specified ab initio over a substantial range. The method creates an infinite set of shapes whose areas (lengths, volumes) obey a power law and sum to the area (length, volume) to be filled. The algorithm begins by randomly placing the largest shape and continues using random search to place each smaller shape where it does not overlap or touch any previously placed shape. The resulting gasket is a single connected object.
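
The abstract describes the placement loop closely enough to sketch the simplest case, circles filling a unit square. The sketch below is an illustrative reconstruction rather than the authors' code: the power-law offset n0, the truncation of the normalizing series and the plain rejection-sampling search are assumptions, and the loop is not guaranteed to terminate for every choice of the exponent c.

```python
import math
import random

def fill_with_circles(c=1.3, n0=2.5, count=200, seed=1):
    """Place non-overlapping circles with power-law areas in the unit square.

    The area of the i-th circle is proportional to 1/(i + n0)**c, normalized
    so that the (truncated) series sums to the unit area being filled.  Each
    circle is placed by random search until it neither overlaps nor touches
    any previously placed circle.
    """
    rng = random.Random(seed)
    norm = sum(1.0 / (i + n0) ** c for i in range(1_000_000))  # long partial sum
    circles = []  # (x, y, radius)
    for i in range(count):
        area = (1.0 / (i + n0) ** c) / norm        # fraction of the unit square
        r = math.sqrt(area / math.pi)
        while True:                                # random search / rejection sampling
            x, y = rng.uniform(r, 1.0 - r), rng.uniform(r, 1.0 - r)
            if all(math.hypot(x - cx, y - cy) > r + cr for cx, cy, cr in circles):
                circles.append((x, y, r))
                break
    return circles
```

The abstract's claim is that, over a substantial range of c, such a search keeps succeeding and the shape areas sum to the area being filled, so the region left between the placed shapes becomes the random fractal gasket.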

Item Atomistic Visualization of Mesoscopic Whole-Cell Simulations Using Ray-Casted Instancing (The Eurographics Association and Blackwell Publishing Ltd., 2013) Falk, Martin; Krone, Michael; Ertl, Thomas; Holly Rushmeier and Oliver Deussen
Molecular visualization is an important tool for analysing the results of biochemical simulations. With modern GPU ray casting approaches, it is only possible to render several million atoms interactively unless advanced acceleration methods are employed. Whole‐cell simulations consist of at least several billion atoms even for simplified cell models. However, many instances of only a few different proteins occur in the intracellular environment, which can be exploited to fit the data into graphics memory: for each protein species, one model is stored and rendered once per instance. The proposed method exploits recent algorithmic advances for particle rendering and the repetitive nature of intracellular proteins to visualize dynamic results from mesoscopic simulations of cellular transport processes. We present two out‐of‐core optimizations for the interactive visualization of data sets composed of billions of atoms, as well as details on the data preparation and the employed rendering techniques. Furthermore, we apply advanced shading methods to improve the image quality, including methods that enhance depth and shape perception as well as non‐photorealistic rendering methods. We also show that the method can be used to render scenes composed of triangulated instances, not only implicit surfaces.

Item Customizable LoD for Procedural Architecture (The Eurographics Association and Blackwell Publishing Ltd., 2013) Besuievsky, Gonzalo; Patow, Gustavo; Holly Rushmeier and Oliver Deussen
This paper presents a new semantic and procedural level-of-detail (LoD) method applicable to any rule‐based procedural building definition. The LoD system allows customizable and flexible selection of the architectural assets to simplify, in an efficient and artist‐transparent way. The method, based on an extension of traditional grammars, uses LoD‐oriented commands. A graph‐rewriting process introduces these new commands into the artist‐provided rule set, which makes it possible to select different simplification criteria (distance, screen‐size projection, semantic selection or any arbitrary method) through a scripting interface, according to user needs. In this way, we define a flexible, customizable and efficient procedural LoD system that generates buildings directly with the correct LoD for a given set of viewing and semantic conditions.

Item Design and Fabrication of Faceted Mirror Arrays for Light Field Capture (The Eurographics Association and Blackwell Publishing Ltd., 2013) Fuchs, Martin; Kächele, Markus; Rusinkiewicz, Szymon; Holly Rushmeier and Oliver Deussen
The high resolution of digital cameras has made single‐shot, single‐sensor acquisition of light fields feasible, though considerable design effort is still required to construct the necessary collection of optical elements for particular acquisition scenarios. This paper explores a pipeline for designing, fabricating and utilizing faceted mirror arrays that simplifies this task. The foundation of the pipeline is an interactive tool that automatically optimizes mirror designs while exposing to the user a set of intuitive parameters for light field quality and manufacturing constraints. We investigate two manufacturing processes for automatic fabrication of the resulting designs: one is based on CNC milling, polishing and plating of one solid workpiece, while the other involves assembly of CNC‐cut mirror facets. We demonstrate results for refocusing in a macro photography scenario. In addition, we observe that traditional photographic parameters take on novel roles in the faceted mirror array setup and discuss their influence.

Item An Efficient Algorithm for Determining an Aesthetic Shape Connecting Unorganized 2D Points (The Eurographics Association and Blackwell Publishing Ltd., 2013) Ohrhallinger, S.; Mudur, S.; Holly Rushmeier and Oliver Deussen
We present an efficient algorithm for determining an aesthetically pleasing shape boundary connecting all the points in a given unorganized set of 2D points, with no information other than point coordinates. By posing shape construction as a minimization problem which follows the Gestalt laws, our desired shape Bmin is non‐intersecting, interpolates all points and minimizes a criterion related to these laws. The basis for our algorithm is an initial graph, an extension of the Euclidean minimum spanning tree but with no leaf nodes, called the minimum boundary complex BCmin. BCmin and Bmin can be expressed similarly by parametrizing a topological constraint. A close approximation of BCmin, termed BC0, can be computed quickly using a greedy algorithm. BC0 is then transformed into a closed interpolating boundary Bout in two steps to satisfy Bmin's topological and minimization requirements. Computing Bmin exactly is an NP‐hard problem, whereas Bout is computed in linearithmic time. We present many examples showing considerable improvement over previous techniques, especially for shapes with sharp corners. Source code is available online.

Item Efficient Interpolation of Articulated Shapes Using Mixed Shape Spaces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Marras, S.; Cashman, T. J.; Hormann, K.; Holly Rushmeier and Oliver Deussen
Interpolation between compatible triangle meshes that represent different poses of some object is a fundamental operation in geometry processing. A common approach is to consider the static input shapes as points in a suitable shape space and then use simple linear interpolation in this space to find an interpolated shape. In this paper, we present a new interpolation technique that is particularly tailored for meshes that represent articulated shapes. It is up to an order of magnitude faster than state‐of‐the‐art methods and gives very similar results. To achieve this, our approach introduces a novel shape space that takes advantage of the underlying structure of articulated shapes and distinguishes between rigid parts and non‐rigid joints. This allows us to use fast vertex interpolation on the rigid parts and resort to comparatively slow edge‐based interpolation only for the joints.

Item Fast Shadow Removal Using Adaptive Multi‐Scale Illumination Transfer (The Eurographics Association and Blackwell Publishing Ltd., 2013) Xiao, Chunxia; She, Ruiyun; Xiao, Donglin; Ma, Kwan-Liu; Holly Rushmeier and Oliver Deussen
In this paper, we present a new method for removing shadows from images. First, shadows are detected by interactive brushing assisted with a Gaussian mixture model. Second, the detected shadows are removed using an adaptive illumination transfer approach that accounts for the reflectance variation of the image texture. The contrast and noise levels of the result are then improved with a multi‐scale illumination transfer technique. Finally, any visible shadow boundaries in the image can be eliminated based on our Bayesian framework. We also extend our method to video data and achieve temporally consistent shadow‐free results.
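
To make the notion of illumination transfer concrete, the sketch below applies a deliberately plain, single-scale transfer of illumination statistics from a user-marked lit region to the detected shadow region, channel by channel. It is a baseline illustration only: the adaptive, reflectance-aware and multi-scale transfer, the Gaussian-mixture detection and the Bayesian boundary removal described in the abstract are not reproduced, and the function and parameter names are invented for the example.

```python
import numpy as np

def transfer_illumination(image, shadow_mask, lit_mask, eps=1e-6):
    """Rescale shadow pixels so their statistics match a lit reference region.

    image: float array of shape (H, W, 3) with values in [0, 1].
    shadow_mask, lit_mask: boolean arrays of shape (H, W).
    Works per colour channel; single scale, no texture adaptation.
    """
    result = image.astype(np.float64)
    for ch in range(result.shape[2]):
        channel = result[..., ch]                  # view into result
        shadow, lit = channel[shadow_mask], channel[lit_mask]
        scale = lit.std() / (shadow.std() + eps)   # match standard deviation
        channel[shadow_mask] = (shadow - shadow.mean()) * scale + lit.mean()
    return np.clip(result, 0.0, 1.0)
```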

Item Four‐Dimensional Geometry Lens: A Novel Volumetric Magnification Approach (The Eurographics Association and Blackwell Publishing Ltd., 2013) Li, Bo; Zhao, Xin; Qin, Hong; Holly Rushmeier and Oliver Deussen
We present a novel methodology that utilizes four-dimensional (4D) space deformation to simulate a magnification lens on versatile volume datasets and textured solid models. Compared with other magnification methods (e.g. geometric optics, mesh editing), 4D differential geometry theory and its practice are much more flexible and powerful for preserving shape features (i.e. minimizing angle distortion), and easier to adapt to versatile solid models. The primary advantage of 4D space lies in the fact that we can easily magnify the volume of regions of interest (ROIs) using the additional dimension, while keeping the remaining region unchanged. To achieve this goal, we first embed a 3D volumetric input into 4D space and magnify the ROIs in the fourth dimension. Then we flatten the 4D shape back into 3D space to accommodate other typical applications in the real 3D world. To enforce distortion minimization, in both steps we devise high‐dimensional geometry techniques, based on rigorous 4D geometry theory, for mapping back and forth between 3D and 4D and amending the distortion. Our system preserves not only the focus region, but also the context region and the global shape. We demonstrate the effectiveness and robustness of our framework with a variety of models ranging from tetrahedral meshes to volume datasets.

Item Interactive Mesh Smoothing for Medical Applications (The Eurographics Association and Blackwell Publishing Ltd., 2013) Mönch, Tobias; Lawonn, Kai; Kubisch, Christoph; Westermann, Rüdiger; Preim, Bernhard; Holly Rushmeier and Oliver Deussen
Surface models derived from medical image data often exhibit artefacts, such as noise and staircases, which can be reduced by applying mesh smoothing filters. Usually, an iterative adaptation of smoothing parameters to the specific data and continuous re‐evaluation of accuracy and curvature are required. Depending on the number of vertices and the filter algorithm, computation time may vary strongly and interfere with an interactive mesh generation procedure. In this paper, we present an approach to improve the handling of mesh smoothing filters. Based on a GPU mesh smoothing implementation of uniform and anisotropic filters, model quality is evaluated in real time and provided to the user to support the mental optimization of input parameters. This is achieved by means of quality graphs and quality bars. Moreover, this framework is used to find appropriate smoothing parameters automatically and to provide data‐specific parameter suggestions. These suggestions are employed to generate a preview gallery of differently smoothed results. The preview functionality is additionally used for the inspection of specific artefacts and their possible reduction with different parameter sets.
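
For reference, the uniform filter mentioned in the abstract corresponds to standard uniform Laplacian smoothing. The sketch below runs it on the CPU and prints a crude accuracy indicator (mean displacement from the original vertices) after each iteration, as a stand-in for the real-time quality feedback the paper provides; the GPU implementation, the anisotropic filter and the quality graphs and bars are not reproduced, and the smoothing weight lam is an illustrative choice.

```python
import numpy as np

def uniform_laplacian_smoothing(vertices, faces, iterations=10, lam=0.5):
    """Uniform Laplacian smoothing of a triangle mesh.

    vertices: (N, 3) float array; faces: (M, 3) int array of vertex indices.
    Each iteration moves every vertex a fraction lam towards the centroid of
    its one-ring neighbours and reports the mean displacement from the
    original mesh as a simple accuracy indicator.
    """
    verts = vertices.astype(np.float64)
    original = verts.copy()
    # Build one-ring adjacency from the faces.
    neighbours = [set() for _ in range(len(verts))]
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    neighbours = [np.fromiter(n, dtype=int) for n in neighbours]

    for it in range(iterations):
        centroids = np.array([verts[n].mean(axis=0) if len(n) else verts[i]
                              for i, n in enumerate(neighbours)])
        verts += lam * (centroids - verts)
        error = np.linalg.norm(verts - original, axis=1).mean()
        print(f"iteration {it + 1}: mean displacement = {error:.6f}")
    return verts
```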

Item Issue Information (The Eurographics Association and Blackwell Publishing Ltd., 2013) Holly Rushmeier and Oliver Deussen

Item Massively Parallel Hierarchical Scene Processing with Applications in Rendering (The Eurographics Association and Blackwell Publishing Ltd., 2013) Vinkler, Marek; Bittner, Jiří; Havran, Vlastimil; Hapala, Michal; Holly Rushmeier and Oliver Deussen
We present a novel method for massively parallel hierarchical scene processing on the GPU, which is based on sequential decomposition of the given hierarchical algorithm into small functional blocks. The computation is fully managed by the GPU using a specialized task pool which facilitates synchronization and communication of processing units. We present two applications of the proposed approach: construction of bounding volume hierarchies and collision detection based on divide‐and‐conquer ray tracing. The results indicate that our approach achieves high utilization of the GPU even for complex hierarchical problems which pose a challenge for massive parallelization.

Item Measuring Privacy and Utility in Privacy-Preserving Visualization (The Eurographics Association and Blackwell Publishing Ltd., 2013) Dasgupta, Aritra; Chen, Min; Kosara, Robert; Holly Rushmeier and Oliver Deussen
In previous work, we proposed a technique for preserving the privacy of quasi-identifiers in sensitive data when visualized using parallel coordinates. This paper builds on that work by introducing a number of metrics that can be used to assess both the level of privacy and the amount of utility that can be gained from the resulting visualizations. We also generalize our approach beyond parallel coordinates to scatter plots and other visualization techniques. Privacy preservation generally entails a trade‐off between privacy and utility: the more the data are protected, the less useful the visualization. Using a visually oriented approach, we can provide more utility than directly applying the data anonymization techniques used in data mining. To demonstrate this, we use the visual uncertainty framework to systematically define metrics based on cluster artifacts and information‐theoretic principles. In a case study, we demonstrate the effectiveness of our technique compared to standard data‐based clustering in the context of privacy‐preserving visualization.

Item Modelling Bending Behaviour in Cloth Simulation Using Hysteresis (The Eurographics Association and Blackwell Publishing Ltd., 2013) Wong, T. H.; Leach, G.; Zambetta, F.; Holly Rushmeier and Oliver Deussen
Real cloth exhibits bending effects, such as residual curvatures and permanent wrinkles. These are typically explained by bending plastic deformation due to internal friction in the fibre and yarn structure. Internal friction also gives rise to energy dissipation, which significantly affects the dynamic behaviour of cloth. In textile research, hysteresis is used to analyse these effects and can be modelled using complex friction terms at the level of the fabric's geometric structure. The hysteresis loop is central to the modelling and understanding of elastic and inelastic (plastic) behaviour, and is often measured as a physical characteristic to analyse and predict fabric behaviour. However, in cloth simulation in computer graphics, the use of hysteresis to capture these effects has not been reported so far. Existing approaches have typically used plasticity models to simulate plastic deformation. In this paper, we report on experiments using a simple mathematical approximation to an ideal hysteresis loop at a high level to capture the previously mentioned effects. Fatigue weakening effects during repeated flexural deformation are also considered based on the hysteresis model. Comparisons with previous bending models and plasticity methods are provided to point out differences and advantages. The method requires only marginal extra computation time.
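
As background, an ideal hysteresis loop of the kind measured in textile bending tests can be approximated with a single frictional offset around the elastic bending line. The sketch below is such a textbook approximation with invented constants; it is not the paper's model and omits the fatigue-weakening behaviour the abstract mentions.

```python
def bending_moment(curvature, prev_curvature, rigidity=0.5, friction=0.05):
    """Ideal hysteresis loop: elastic bending line plus a frictional offset.

    The offset's sign follows the direction of the curvature change, so
    sweeping the curvature up and down traces a parallelogram-shaped loop.
    rigidity and friction are illustrative constants, not measured values.
    """
    if curvature > prev_curvature:       # loading branch
        return rigidity * curvature + friction
    if curvature < prev_curvature:       # unloading branch
        return rigidity * curvature - friction
    return rigidity * curvature          # no change: stay on the elastic line
```

Sweeping the curvature up and down with this rule encloses a loop whose area is the energy dissipated per cycle, which is the internal-friction effect the abstract refers to.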

Item Motion Synthesis for Sports Using Unobtrusive Lightweight Body‐Worn and Environment Sensing (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kelly, P.; Conaire, C. Ó; O'Connor, N. E.; Hodgins, J.; Holly Rushmeier and Oliver Deussen
The ability to achieve accurate performance capture of athlete motion during competitive play in near real time promises to revolutionize not only broadcast sports graphics visualization and commentary, but also potentially performance analysis, sports medicine, fantasy sports and wagering. In this paper, we present a highly portable, non‐intrusive approach for synthesizing human athlete motion in competitive game‐play with lightweight instrumentation of both the athlete and the field of play. Our data‐driven puppetry technique relies on a pre‐captured database of short segments of motion capture data to construct a motion graph augmented with interpolated motions and speed variations. An athlete's performed motion is synthesized by finding a related action sequence through the motion graph using a sparse set of measurements from the performance, acquired from both worn inertial and global location sensors. We demonstrate the efficacy of our approach in a challenging application scenario, with a high‐performance tennis athlete wearing one or more lightweight body‐worn accelerometers and a single overhead camera providing the athlete's global position and orientation data. However, the approach is flexible in both the number and variety of input sensor data used. The technique can also be adopted for searching a motion graph efficiently in linear time in alternative applications.

Item Multiple Light Source Estimation in a Single Image (The Eurographics Association and Blackwell Publishing Ltd., 2013) Lopez-Moreno, Jorge; Garces, Elena; Hadap, Sunil; Reinhard, Erik; Gutierrez, Diego; Holly Rushmeier and Oliver Deussen
Many high‐level image processing tasks require an estimate of the positions, directions and relative intensities of the light sources that illuminated the depicted scene. In image‐based rendering, augmented reality and computer vision, such tasks include matching image contents based on illumination, inserting rendered synthetic objects into a natural image, intrinsic images, shape from shading and image relighting. Yet, accurate and robust illumination estimation, particularly from a single image, is a highly ill‐posed problem. In this paper, we present a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities. In contrast to previous methods, we base our azimuth angle estimation on curve fitting and recursive refinement of the number of light sources. Similarly, we present a novel surface normal approximation using an osculating arc for the estimation of zenith angles. By means of a new data set of ground‐truth data and images, we demonstrate that our approach produces more robust and accurate results, and show its versatility through novel applications such as image compositing and analysis.

Item Non-Oriented MLS Gradient Fields (The Eurographics Association and Blackwell Publishing Ltd., 2013) Chen, Jiazhou; Guennebaud, Gaël; Barla, Pascal; Granier, Xavier; Holly Rushmeier and Oliver Deussen
We introduce a new approach for defining continuous non-oriented gradient fields from discrete inputs, a fundamental stage for a variety of computer graphics applications such as surface or curve reconstruction and image stylization. Our approach builds on a moving least squares formalism that computes higher‐order local approximations of non‐oriented input gradients. In particular, we show that our novel isotropic linear approximation outperforms its lower‐order alternative: surface or image structures are much better preserved, and instabilities are significantly reduced. Thanks to its ease of implementation (on both CPU and GPU) and small performance overhead, we believe our approach will find widespread use in graphics applications, as demonstrated by the variety of our results.

Item Non‐Local Image Reconstruction for Efficient Computation of Synthetic Bidirectional Texture Functions (The Eurographics Association and Blackwell Publishing Ltd., 2013) Schröder, K.; Klein, R.; Zinke, A.; Holly Rushmeier and Oliver Deussen
Visual prototyping of materials is relevant for many computer graphics applications. A large amount of modelling flexibility can be obtained by directly rendering micro‐geometry. While this is possible in principle, it is usually computationally expensive. Recently, bidirectional texture functions (BTFs) have become popular for efficient photorealistic rendering of surfaces. We propose an efficient system for the computation of synthetic BTFs using Monte Carlo path tracing of micro‐geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions. By exploiting this structural similarity, we can reduce rendering times by one order of magnitude. This is done in a process we call non‐local image reconstruction, which is inspired by non‐local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently take only a few minutes for BTFs with 70 × 70 viewing and lighting directions and 128 × 128 pixels.
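
The abstract names non-local means filtering as the inspiration for the reconstruction step. For readers unfamiliar with it, the sketch below is the textbook non-local means filter on a greyscale image; the paper's reconstruction instead operates on BTF data and exploits similarity between apparent BRDFs, so this is context rather than the authors' method.

```python
import numpy as np

def non_local_means(image, patch_radius=2, search_radius=7, h=0.1):
    """Textbook non-local means filtering of a 2D greyscale image (slow, for clarity).

    Each pixel is replaced by a weighted average of pixels in a search window;
    the weight decays with the squared distance between the patches centred on
    the two pixels.  h controls the filtering strength.
    """
    pad = patch_radius + search_radius
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")
    out = np.zeros_like(image, dtype=np.float64)
    H, W = image.shape
    for y in range(H):
        for x in range(W):
            yc, xc = y + pad, x + pad
            ref = padded[yc - patch_radius:yc + patch_radius + 1,
                         xc - patch_radius:xc + patch_radius + 1]
            weights, total = 0.0, 0.0
            for dy in range(-search_radius, search_radius + 1):
                for dx in range(-search_radius, search_radius + 1):
                    cand = padded[yc + dy - patch_radius:yc + dy + patch_radius + 1,
                                  xc + dx - patch_radius:xc + dx + patch_radius + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    weights += w
                    total += w * padded[yc + dy, xc + dx]
            out[y, x] = total / weights
    return out
```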

Item Patch-Collaborative Spectral Point-Cloud Denoising (The Eurographics Association and Blackwell Publishing Ltd., 2013) Rosman, G.; Dubrovina, A.; Kimmel, R.; Holly Rushmeier and Oliver Deussen
We present a new framework for point cloud denoising by patch-collaborative spectral analysis. A collaborative generalization of each surface patch is defined, combining similar patches from the denoised surface. The Laplace–Beltrami operator of the collaborative patch is then used to selectively smooth the surface in a robust manner that can gracefully handle high levels of noise, yet preserves sharp surface features. The resulting denoising algorithm competes favourably with state‐of‐the‐art approaches, and extends patch‐based algorithms from the image processing domain to point clouds of arbitrary sampling. We demonstrate the accuracy and noise‐robustness of the proposed algorithm on standard benchmark models as well as range scans, and compare it to existing methods for point cloud denoising.

Item Reviewers (The Eurographics Association and Blackwell Publishing Ltd., 2013) Holly Rushmeier and Oliver Deussen

Item SCALe-invariant Integral Surfaces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Zanni, C.; Bernhardt, A.; Quiblier, M.; Cani, M.-P.; Holly Rushmeier and Oliver Deussen
Extraction of skeletons from solid shapes has attracted considerable attention, but so far less attention has been paid to the reverse operation: generating smooth surfaces from skeletons and local radius information. Convolution surfaces, i.e. implicit surfaces generated by integrating a smoothing kernel along a skeleton, were developed to do so. However, they failed to reconstruct prescribed radii and were unable to model large shapes with fine details. This work introduces SCALe‐invariant Integral Surfaces (SCALIS), a new paradigm for implicit modelling from skeleton graphs. Like convolution surfaces, our new surfaces smoothly blend when field contributions from new skeleton parts are added. However, in contrast with convolution surfaces, the blending properties are scale‐invariant. This brings three major benefits: the radius of the surface around a skeleton can be explicitly controlled, shapes generated in blending regions are self‐similar regardless of the scale of the model, and thin shape components are not excessively smoothed out when blended into larger ones.
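
As context for the comparison drawn in the abstract, a classic convolution surface field can be evaluated by numerically integrating a smoothing kernel along each skeleton segment, as in the sketch below. The Gaussian kernel, the midpoint quadrature and the constants are illustrative assumptions; the scale-invariant SCALIS formulation itself is not reproduced here.

```python
import numpy as np

def convolution_field(point, segments, sigma=0.2, samples=32):
    """Scalar field of a classic convolution surface, evaluated at one point.

    The field is the integral of a smoothing kernel along every skeleton
    segment, approximated here by midpoint quadrature.  segments is a list
    of (start, end) pairs of 3D points; the surface is an iso-contour of
    this field, e.g. {p : convolution_field(p, segments) = 0.5}.
    """
    p = np.asarray(point, dtype=np.float64)
    field = 0.0
    for start, end in segments:
        a, b = np.asarray(start, dtype=np.float64), np.asarray(end, dtype=np.float64)
        length = np.linalg.norm(b - a)
        ts = (np.arange(samples) + 0.5) / samples          # midpoint rule
        for t in ts:
            r2 = np.sum((p - (a + t * (b - a))) ** 2)      # squared distance to sample
            field += np.exp(-r2 / (2.0 * sigma * sigma)) * (length / samples)
    return field
```

With such a field, the radius of the surface around a segment depends on the kernel width, the chosen iso-value and neighbouring contributions; that indirect control is exactly what the explicit-radius property claimed for SCALIS is designed to avoid.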