32-Issue 8
Item Issue Information (The Eurographics Association and Blackwell Publishing Ltd., 2013) Holly Rushmeier and Oliver Deussen

Item Efficient Interpolation of Articulated Shapes Using Mixed Shape Spaces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Marras, S.; Cashman, T. J.; Hormann, K.; Holly Rushmeier and Oliver Deussen
Interpolation between compatible triangle meshes that represent different poses of some object is a fundamental operation in geometry processing. A common approach is to consider the static input shapes as points in a suitable shape space and then use simple linear interpolation in this space to find an interpolated shape. In this paper, we present a new interpolation technique that is particularly tailored for meshes that represent articulated shapes. It is up to an order of magnitude faster than state-of-the-art methods and gives very similar results. To achieve this, our approach introduces a novel shape space that takes advantage of the underlying structure of articulated shapes and distinguishes between rigid parts and non-rigid joints. This allows us to use fast vertex interpolation on the rigid parts and resort to comparatively slow edge-based interpolation only for the joints.

Item Spherical Fibonacci Point Sets for Illumination Integrals (The Eurographics Association and Blackwell Publishing Ltd., 2013) Marques, R.; Bouville, C.; Ribardière, M.; Santos, L. P.; Bouatouch, K.; Holly Rushmeier and Oliver Deussen
Quasi-Monte Carlo (QMC) methods exhibit a faster convergence rate than that of classic Monte Carlo methods. This feature has made QMC prevalent in image synthesis, where it is frequently used for approximating the value of spherical integrals (e.g. the illumination integral). The common approach for generating QMC sampling patterns for spherical integration is to resort to unit-square low-discrepancy sequences and map them to the hemisphere. However, such an approach is suboptimal, as these sequences do not account for the spherical topology and their discrepancy properties on the unit square are impaired by the spherical projection. In this paper we present a strategy for producing high-quality QMC sampling patterns for spherical integration by resorting to spherical Fibonacci point sets. We show that these patterns, when applied to illumination integrals, are very simple to generate and consistently outperform existing approaches, both in terms of root mean square error (RMSE) and image quality. Furthermore, only a single pattern is required to produce an image, thanks to a scrambling scheme performed directly in the spherical domain.
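As a rough illustration of the kind of sampling pattern this paper builds on, the sketch below generates a standard golden-angle spherical Fibonacci lattice; it is not the authors' generator and omits their spherical scrambling scheme, and the sample count is arbitrary.

```python
import numpy as np

def spherical_fibonacci(n):
    """Generate n near-uniform unit directions on the sphere with the
    golden-angle (Fibonacci) spiral; a generic sketch, not the paper's code."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))     # ~2.39996 rad
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n                   # uniformly spaced in (-1, 1)
    theta = golden_angle * i                        # azimuth angle
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

# Example: 64 directions, keeping the upper hemisphere for an illumination integral.
dirs = spherical_fibonacci(64)
hemi = dirs[dirs[:, 2] > 0.0]
```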
Item Customizable LoD for Procedural Architecture (The Eurographics Association and Blackwell Publishing Ltd., 2013) Besuievsky, Gonzalo; Patow, Gustavo; Holly Rushmeier and Oliver Deussen
This paper presents a new semantic and procedural level-of-detail (LoD) method applicable to any rule-based procedural building definition. This new LoD system allows the customizable and flexible selection of the architectural assets to simplify, doing so in an efficient and artist-transparent way. The method, based on an extension of traditional grammars, uses LoD-oriented commands. A graph-rewriting process introduces these new commands into the artist-provided rule set, which allows the selection of different simplification criteria (distance, screen-size projection, semantic selection or any arbitrary method) through a scripting interface, according to user needs. This way we define a flexible, customizable and efficient procedural LoD system, which generates buildings directly with the correct LoD for a given set of viewing and semantic conditions.

Item Fast Shadow Removal Using Adaptive Multi-Scale Illumination Transfer (The Eurographics Association and Blackwell Publishing Ltd., 2013) Xiao, Chunxia; She, Ruiyun; Xiao, Donglin; Ma, Kwan-Liu; Holly Rushmeier and Oliver Deussen
In this paper, we present a new method for removing shadows from images. First, shadows are detected by interactive brushing assisted with a Gaussian Mixture Model. Second, the detected shadows are removed using an adaptive illumination transfer approach that accounts for the reflectance variation of the image texture. The contrast and noise levels of the result are then improved with a multi-scale illumination transfer technique. Finally, any visible shadow boundaries in the image can be eliminated based on our Bayesian framework. We also extend our method to video data and achieve temporally consistent shadow-free results.
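To convey the general idea of GMM-assisted brushing for shadow detection, here is a generic sketch only, not the authors' pipeline; the per-pixel RGB feature, the component count and the likelihood comparison are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def shadow_mask(image, shadow_samples, lit_samples, n_components=3):
    """image: (H, W, 3) float RGB in [0, 1]; shadow_samples / lit_samples:
    (N, 3) pixel colours gathered from user brush strokes.
    Returns a boolean (H, W) shadow mask by comparing per-pixel likelihoods
    under two Gaussian mixtures. A hypothetical illustration only."""
    gmm_shadow = GaussianMixture(n_components=n_components).fit(shadow_samples)
    gmm_lit = GaussianMixture(n_components=n_components).fit(lit_samples)
    feats = image.reshape(-1, 3)
    # Label a pixel as shadow if the shadow mixture explains it better.
    is_shadow = gmm_shadow.score_samples(feats) > gmm_lit.score_samples(feats)
    return is_shadow.reshape(image.shape[:2])
```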
Item Sketch-Based Editing Tools for Tumour Segmentation in 3D Medical Images (The Eurographics Association and Blackwell Publishing Ltd., 2013) Heckel, Frank; Moltz, Jan H.; Tietjen, Christian; Hahn, Horst K.; Holly Rushmeier and Oliver Deussen
In the past years, sophisticated automatic segmentation algorithms for various medical image segmentation problems have been developed. However, there are always cases where automatic algorithms fail to provide an acceptable segmentation. In these cases the user needs efficient segmentation editing tools, a problem which has not received much attention in research. We give a comprehensive overview of segmentation editing for three-dimensional (3D) medical images. For segmentation editing in two-dimensional (2D) images, we discuss a sketch-based approach where the user modifies the segmentation in the contour domain. Based on this 2D interface, we present an image-based as well as an image-independent method for intuitive and efficient segmentation editing in 3D in the context of tumour segmentation in computed tomography (CT). Our editing tools have been evaluated on a database containing 1226 representative liver metastases, lung nodules and lymph nodes of different shape, size and image quality. In addition, we have performed a qualitative evaluation with radiologists and technical experts, proving the efficiency of our tools.

Item Reviewers (The Eurographics Association and Blackwell Publishing Ltd., 2013) Holly Rushmeier and Oliver Deussen

Item Non-Local Image Reconstruction for Efficient Computation of Synthetic Bidirectional Texture Functions (The Eurographics Association and Blackwell Publishing Ltd., 2013) Schröder, K.; Klein, R.; Zinke, A.; Holly Rushmeier and Oliver Deussen
Visual prototyping of materials is relevant for many computer graphics applications. A large amount of modelling flexibility can be obtained by directly rendering micro-geometry. While this is possible in principle, it is usually computationally expensive. Recently, bidirectional texture functions (BTFs) have become popular for efficient photorealistic rendering of surfaces. We propose an efficient system for the computation of synthetic BTFs using Monte Carlo path tracing of micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions. By exploiting structural similarity, we can reduce rendering times by one order of magnitude. This is done in a process we call non-local image reconstruction, which has been inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently only take a few minutes for BTFs with 70 × 70 viewing and lighting directions and 128 × 128 pixels.
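For readers unfamiliar with the non-local means principle that inspired this reconstruction, a minimal grey-scale version is sketched below; it is a generic patch-weighting example, not the authors' BTF pipeline, and the patch size, search window and bandwidth h are assumptions.

```python
import numpy as np

def nonlocal_means_pixel(image, y, x, patch=3, search=10, h=0.1):
    """Estimate pixel (y, x) of a grey-scale image as a similarity-weighted
    average over a search window: pixels whose surrounding patches resemble
    the reference patch contribute more. Illustrates the principle only."""
    pad = patch // 2
    padded = np.pad(image, pad, mode='reflect')
    ref = padded[y:y + patch, x:x + patch]           # patch centred on (y, x)
    num, den = 0.0, 0.0
    for j in range(max(0, y - search), min(image.shape[0], y + search + 1)):
        for i in range(max(0, x - search), min(image.shape[1], x + search + 1)):
            cand = padded[j:j + patch, i:i + patch]
            w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
            num += w * image[j, i]
            den += w
    return num / den
```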
Item Sketch-to-Design: Context-Based Part Assembly (The Eurographics Association and Blackwell Publishing Ltd., 2013) Xie, Xiaohua; Xu, Kai; Mitra, Niloy J.; Cohen-Or, Daniel; Gong, Wenyong; Su, Qi; Chen, Baoquan; Holly Rushmeier and Oliver Deussen
Designing 3D objects from scratch is difficult, especially when the user intent is fuzzy and lacks a clear target form. We facilitate design by providing reference and inspiration from existing model contexts. We rethink model design as navigating through different possible combinations of part assemblies based on a large collection of pre-segmented 3D models. We propose an interactive sketch-to-design system, where the user sketches prominent features of parts to combine. The sketched strokes are analysed individually, and more importantly, in context with the other parts to generate relevant shape suggestions via a design gallery interface. As a modelling session progresses and more parts get selected, contextual cues become increasingly dominant, and the model quickly converges to a final form. As a key enabler, we use pre-learned part-based contextual information to allow the user to quickly explore different combinations of parts. Our experiments demonstrate the effectiveness of our approach for efficiently designing new variations from existing shape collections.

Item Visual Analysis of Multi-Dimensional Categorical Data Sets (The Eurographics Association and Blackwell Publishing Ltd., 2013) Broeksema, Bertjan; Telea, Alexandru C.; Baudel, Thomas; Holly Rushmeier and Oliver Deussen
We present a set of interactive techniques for the visual analysis of multi-dimensional categorical data. Our approach is based on multiple correspondence analysis (MCA), which allows one to analyse relationships, patterns, trends and outliers among dependent categorical variables. We use MCA as a dimensionality reduction technique to project both observations and their attributes in the same 2D space. We use a treeview to show attributes and their domains, together with a histogram of their representativity in the data set, as a compact overview of attribute-related facts. A second view shows both attributes and observations. We use a Voronoi diagram whose cells can be interactively merged to discover salient attributes, cluster values and bin categories. Bar chart legends help assign meaning to the 2D view axes and 2D point clusters. We illustrate our techniques with real-world application data.
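For readers unfamiliar with MCA, the projection mentioned above can be sketched as correspondence analysis of the one-hot (indicator) matrix. This is a textbook illustration of the technique, not the authors' implementation, and the pandas-based encoding step is an assumption.

```python
import numpy as np
import pandas as pd

def mca_coordinates(df, n_components=2):
    """df: a table of categorical columns. Returns principal coordinates for
    the observations (rows) and the attribute categories (columns)."""
    Z = pd.get_dummies(df).to_numpy(dtype=float)   # indicator (disjunctive) matrix
    P = Z / Z.sum()                                # correspondence matrix
    r = P.sum(axis=1)                              # row masses (observations)
    c = P.sum(axis=0)                              # column masses (categories)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * s) / np.sqrt(r)[:, None]           # observation coordinates
    cols = (Vt.T * s) / np.sqrt(c)[:, None]        # category coordinates
    return rows[:, :n_components], cols[:, :n_components]
```

Plotting both outputs in the same 2D scatter places observations near the categories that characterise them, which is the property the visual-analysis views above rely on.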
Item Measuring Privacy and Utility in Privacy-Preserving Visualization (The Eurographics Association and Blackwell Publishing Ltd., 2013) Dasgupta, Aritra; Chen, Min; Kosara, Robert; Holly Rushmeier and Oliver Deussen
In previous work, we proposed a technique for preserving the privacy of quasi-identifiers in sensitive data when visualized using parallel coordinates. This paper builds on that work by introducing a number of metrics that can be used to assess both the level of privacy and the amount of utility that can be gained from the resulting visualizations. We also generalize our approach beyond parallel coordinates to scatter plots and other visualization techniques. Privacy preservation generally entails a trade-off between privacy and utility: the more the data are protected, the less useful the visualization. Using a visually-oriented approach, we can provide a higher amount of utility than directly applying data anonymization techniques used in data mining. To demonstrate this, we use the visual uncertainty framework for systematically defining metrics based on cluster artifacts and information theoretic principles. In a case study, we demonstrate the effectiveness of our technique as compared to standard data-based clustering in the context of privacy-preserving visualization.

Item Non-Oriented MLS Gradient Fields (The Eurographics Association and Blackwell Publishing Ltd., 2013) Chen, Jiazhou; Guennebaud, Gaël; Barla, Pascal; Granier, Xavier; Holly Rushmeier and Oliver Deussen
We introduce a new approach for defining continuous non-oriented gradient fields from discrete inputs, a fundamental stage for a variety of computer graphics applications such as surface or curve reconstruction, and image stylization. Our approach builds on a moving least square formalism that computes higher-order local approximations of non-oriented input gradients.
In particular, we show that our novel isotropic linear approximation outperforms its lower-order alternative: surface or image structures are much better preserved, and instabilities are significantly reduced. Thanks to its ease of implementation (on both CPU and GPU) and small performance overhead, we believe our approach will find widespread use in graphics applications, as demonstrated by the variety of our results.

Item SCALe-invariant Integral Surfaces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Zanni, C.; Bernhardt, A.; Quiblier, M.; Cani, M.-P.; Holly Rushmeier and Oliver Deussen
Extraction of skeletons from solid shapes has attracted quite a lot of attention, but less attention has been paid so far to the reverse operation: generating smooth surfaces from skeletons and local radius information. Convolution surfaces, i.e. implicit surfaces generated by integrating a smoothing kernel along a skeleton, were developed to do so. However, they failed to reconstruct prescribed radii and were unable to model large shapes with fine details. This work introduces SCALe-invariant Integral Surfaces (SCALIS), a new paradigm for implicit modelling from skeleton graphs. Similarly to convolution surfaces, our new surfaces still smoothly blend when field contributions from new skeleton parts are added. However, in contrast with convolution surfaces, blending properties are scale-invariant. This brings three major benefits: the radius of the surface around a skeleton can be explicitly controlled, shapes generated in blending regions are self-similar regardless of the scale of the model and thin shape components are not excessively smoothed out when blended into larger ones.

Item Modelling Bending Behaviour in Cloth Simulation Using Hysteresis (The Eurographics Association and Blackwell Publishing Ltd., 2013) Wong, T. H.; Leach, G.; Zambetta, F.; Holly Rushmeier and Oliver Deussen
Real cloth exhibits bending effects, such as residual curvatures and permanent wrinkles. These are typically explained by bending plastic deformation due to internal friction in the fibre and yarn structure.
Internal friction also gives rise to energy dissipation which significantly affects cloth dynamic behaviour. In textile research, hysteresis is used to analyse these effects, and can be modelled using complex friction terms at the fabric geometric structure level. The hysteresis loop is central to the modelling and understanding of elastic and inelastic (plastic) behaviour, and is often measured as a physical characteristic to analyse and predict fabric behaviour. However, in cloth simulation in computer graphics the use of hysteresis to capture these effects has not been reported so far. Existing approaches have typically used plasticity models for simulating plastic deformation. In this paper, we report on our investigation into experiments using a simple mathematical approximation to an ideal hysteresis loop at a high level to capture the previously mentioned effects. Fatigue weakening effects during repeated flexural deformation are also considered based on the hysteresis model. Comparisons with previous bending models and plasticity methods are provided to point out differences and advantages. The method requires only incremental extra computation time.

Item An Algorithm for Random Fractal Filling of Space (The Eurographics Association and Blackwell Publishing Ltd., 2013) Shier, John; Bourke, Paul; Holly Rushmeier and Oliver Deussen
Computational experiments with a simple algorithm show that it is possible to fill any spatial region with a random fractalization of any shape, with a continuous range of pre-specified fractal dimensions D. The algorithm is presented here in 1, 2 or 3 physical dimensions. The size power-law exponent c or the fractal dimension D can be specified ab initio over a substantial range. The method creates an infinite set of shapes whose areas (lengths, volumes) obey a power law and sum to the area (length, volume) to be filled. The algorithm begins by randomly placing the largest shape and continues using random search to place each smaller shape where it does not overlap or touch any previously placed shape. The resulting gasket is a single connected object.
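A minimal sketch of the placement loop described in this abstract, using circles in a unit square: the power-law offset N, the exponent c and the rejection-sampling bound are assumptions, and the paper's connectivity analysis is not reproduced.

```python
import numpy as np
from scipy.special import zeta   # Hurwitz zeta, normalises the infinite power-law series

def fractal_circle_fill(n=200, c=1.3, N=2.5, seed=0, max_tries=20000):
    """Place n non-overlapping, non-touching circles in the unit square whose
    areas follow A_i ~ 1/(i + N)^c; the infinite series of areas sums to the
    square's area, so the filled fraction approaches 1 as n grows."""
    rng = np.random.default_rng(seed)
    areas = (1.0 / (np.arange(n) + N) ** c) / zeta(c, N)
    radii = np.sqrt(areas / np.pi)
    placed = []                                    # accepted (x, y, r) triples
    for r in radii:                                # largest shape first
        for _ in range(max_tries):
            x, y = rng.uniform(r, 1.0 - r, size=2)
            if all((x - px) ** 2 + (y - py) ** 2 > (r + pr) ** 2
                   for px, py, pr in placed):
                placed.append((x, y, r))
                break
        else:
            raise RuntimeError("no free spot found; adjust n, c or N")
    return placed
```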
Item Motion Synthesis for Sports Using Unobtrusive Lightweight Body-Worn and Environment Sensing (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kelly, P.; Conaire, C. Ó; O'Connor, N. E.; Hodgins, J.; Holly Rushmeier and Oliver Deussen
The ability to accurately achieve performance capture of athlete motion during competitive play in near real-time promises to revolutionize not only broadcast sports graphics visualization and commentary, but also potentially performance analysis, sports medicine, fantasy sports and wagering. In this paper, we present a highly portable, non-intrusive approach for synthesizing human athlete motion in competitive game-play with lightweight instrumentation of both the athlete and field of play. Our data-driven puppetry technique relies on a pre-captured database of short segments of motion capture data to construct a motion graph augmented with interpolated motions and speed variations. An athlete's performed motion is synthesized by finding a related action sequence through the motion graph using a sparse set of measurements from the performance, acquired from both worn inertial and global location sensors. We demonstrate the efficacy of our approach in a challenging application scenario, with a high-performance tennis athlete wearing one or more lightweight body-worn accelerometers and a single overhead camera providing the athlete's global position and orientation data. However, the approach is flexible in both the number and variety of input sensor data used. The technique can also be adopted for searching a motion graph efficiently in linear time in alternative applications.
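To illustrate what searching a motion graph against sparse sensor measurements can look like, here is a generic dynamic-programming sketch; it is not the authors' linear-time method, and the clip graph, observation format and cost model are assumptions.

```python
def best_clip_path(clips, edges, observations, cost):
    """clips: iterable of clip ids; edges: dict mapping a clip to the clips
    allowed to follow it; observations: one sensor measurement per time step;
    cost(clip, obs): user-supplied mismatch between a clip's expected sensor
    signature and an observation. Returns the lowest-cost clip sequence."""
    best = {c: (cost(c, observations[0]), [c]) for c in clips}
    for obs in observations[1:]:
        step = {}
        for prev, (acc, path) in best.items():
            for nxt in edges.get(prev, []):
                total = acc + cost(nxt, obs)
                if nxt not in step or total < step[nxt][0]:
                    step[nxt] = (total, path + [nxt])
        best = step
    return min(best.values(), key=lambda v: v[0])[1]
```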
Item Multiple Light Source Estimation in a Single Image (The Eurographics Association and Blackwell Publishing Ltd., 2013) Lopez-Moreno, Jorge; Garces, Elena; Hadap, Sunil; Reinhard, Erik; Gutierrez, Diego; Holly Rushmeier and Oliver Deussen
Many high-level image processing tasks require an estimate of the positions, directions and relative intensities of the light sources that illuminated the depicted scene. In image-based rendering, augmented reality and computer vision, such tasks include matching image contents based on illumination, inserting rendered synthetic objects into a natural image, intrinsic images, shape from shading and image relighting. Yet, accurate and robust illumination estimation, particularly from a single image, is a highly ill-posed problem. In this paper, we present a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities. In contrast to previous methods, we base our azimuth angle estimation on curve fitting and recursive refinement of the number of light sources. Similarly, we present a novel surface normal approximation using an osculating arc for the estimation of zenith angles. By means of a new data set of ground-truth data and images, we demonstrate that our approach produces more robust and accurate results, and show its versatility through novel applications such as image compositing and analysis.

Item Four-Dimensional Geometry Lens: A Novel Volumetric Magnification Approach (The Eurographics Association and Blackwell Publishing Ltd., 2013) Li, Bo; Zhao, Xin; Qin, Hong; Holly Rushmeier and Oliver Deussen
We present a novel methodology that utilizes four-dimensional (4D) space deformation to simulate a magnification lens on versatile volume datasets and textured solid models. Compared with other magnification methods (e.g. geometric optics, mesh editing), 4D differential geometry theory and its practices are much more flexible and powerful for preserving shape features (i.e. minimizing angle distortion), and easier to adapt to versatile solid models. The primary advantage of 4D space lies in the following fact: we can now easily magnify the volume of regions of interest (ROIs) from the additional dimension, while keeping the remaining region unchanged.
To achieve this primary goal, we first embed a 3D volumetric input into 4D space and magnify ROIs in the fourth dimension. Then we flatten the 4D shape back into 3D space to accommodate other typical applications in the real 3D world. In order to enforce distortion minimization, in both steps we devise high-dimensional geometry techniques based on rigorous 4D geometry theory for 3D/4D mapping back and forth to amend the distortion. Our system can preserve not only the focus region, but also the context region and the global shape. We demonstrate the effectiveness, robustness and efficacy of our framework with a variety of models ranging from tetrahedral meshes to volume datasets.

Item An Efficient Algorithm for Determining an Aesthetic Shape Connecting Unorganized 2D Points (The Eurographics Association and Blackwell Publishing Ltd., 2013) Ohrhallinger, S.; Mudur, S.; Holly Rushmeier and Oliver Deussen
We present an efficient algorithm for determining an aesthetically pleasing shape boundary connecting all the points in a given unorganized set of 2D points, with no other information than point coordinates. By posing shape construction as a minimisation problem which follows the Gestalt laws, our desired shape Bmin is non-intersecting, interpolates all points and minimises a criterion related to these laws. The basis for our algorithm is an initial graph, an extension of the Euclidean minimum spanning tree but with no leaf nodes, called the minimum boundary complex BCmin. BCmin and Bmin can be expressed similarly by parametrising a topological constraint. A close approximation of BCmin, termed BC0, can be computed fast using a greedy algorithm. BC0 is then transformed into a closed interpolating boundary Bout in two steps to satisfy Bmin's topological and minimisation requirements. Computing Bmin exactly is an NP-hard problem, whereas Bout is computed in linearithmic time. We present many examples showing considerable improvement over previous techniques, especially for shapes with sharp corners. Source code is available online.

Item Patch-Collaborative Spectral Point-Cloud Denoising (The Eurographics Association and Blackwell Publishing Ltd., 2013) Rosman, G.; Dubrovina, A.; Kimmel, R.; Holly Rushmeier and Oliver Deussen
We present a new framework for point cloud denoising by patch-collaborative spectral analysis. A collaborative generalization of each surface patch is defined, combining similar patches from the denoised surface. The Laplace–Beltrami operator of the collaborative patch is then used to selectively smooth the surface in a robust manner that can gracefully handle high levels of noise, yet preserves sharp surface features.
The resulting denoising algorithm competes favourably with state-of-the-art approaches, and extends patch-based algorithms from the image processing domain to point clouds of arbitrary sampling. We demonstrate the accuracy and noise-robustness of the proposed algorithm on standard benchmark models as well as range scans, and compare it to existing methods for point cloud denoising.
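To give a flavour of the Laplacian-style smoothing such point-cloud methods build on, here is a plain k-nearest-neighbour umbrella-operator sketch; it is not the paper's collaborative-patch Laplace–Beltrami operator, and k, the step size and the iteration count are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def laplacian_smooth(points, k=12, step=0.5, iterations=5):
    """points: (N, 3) float array, a noisy point cloud. Each iteration moves
    every point part of the way towards the centroid of its k nearest
    neighbours, which smooths noise but also blurs sharp features."""
    pts = points.copy()
    for _ in range(iterations):
        tree = cKDTree(pts)
        _, idx = tree.query(pts, k=k + 1)          # first neighbour is the point itself
        centroids = pts[idx[:, 1:]].mean(axis=1)
        pts += step * (centroids - pts)
    return pts
```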