35-Issue 1
Item: Issue Information - TOC (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Chen, Min and Zhang, Hao (Richard)

Item: A Survey of Geometric Analysis in Cultural Heritage (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Pintus, Ruggero; Pal, Kazim; Yang, Ying; Weyrich, Tim; Gobbetti, Enrico; Rushmeier, Holly; Chen, Min and Zhang, Hao (Richard)
We present a review of recent techniques for performing geometric analysis in cultural heritage (CH) applications. The survey is aimed at researchers in the areas of computer graphics, computer vision and CH computing, as well as at scholars and practitioners in the CH field. The problems considered include shape perception enhancement, restoration and preservation support, monitoring over time, object interpretation and collection analysis. All of these problems typically rely on an understanding of the structure of the shapes in question at both a local and global level. In this survey, we discuss the different problem forms and review the main solution methods, aided by classification criteria based on the geometric scale at which the analysis is performed and the cardinality of the relationships among object parts exploited during the analysis. We conclude the report by discussing open problems and future perspectives.

Item: Mobile Surface Reflectometry (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Riviere, J.; Peers, P.; Ghosh, A.; Chen, Min and Zhang, Hao (Richard)
We present two novel mobile reflectometry approaches for acquiring detailed spatially varying isotropic surface reflectance and mesostructure of a planar material sample using commodity mobile devices. The first approach relies on the integrated camera and flash pair present on typical mobile devices to support free-form handheld acquisition of spatially varying rough specular material samples. The second approach, suited for highly specular samples, uses the LCD panel to illuminate the sample with polarized second-order gradient illumination. To address the limited overlap of the front-facing camera's view and the LCD illumination (and thus the limited sample size), we propose a novel appearance transfer method that combines controlled reflectance measurements of a small exemplar section with uncontrolled reflectance measurements of the full sample under natural lighting. Finally, we introduce a novel surface detail enhancement method that adds fine-scale surface mesostructure from close-up observations under uncontrolled natural lighting. We demonstrate the accuracy and versatility of the proposed mobile reflectometry methods on a wide variety of spatially varying materials.
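As an illustration of the gradient-illumination idea mentioned above, simplified to classic first-order linear gradients rather than the polarized second-order patterns the paper uses, per-pixel surface normals can be read off ratio images. A minimal sketch, assuming a Lambertian response; all names are illustrative:

```python
import numpy as np

def normals_from_gradient_illumination(img_x, img_y, img_z, img_full, eps=1e-6):
    """Estimate per-pixel normals from images captured under linear
    gradient illumination patterns along x, y, z plus a full-on
    (constant) pattern. With gradients ramping from 0 to 1, the
    radiance ratio encodes each normal component mapped to [0, 1]."""
    # Map ratios from [0, 1] back to [-1, 1] normal components.
    nx = 2.0 * img_x / (img_full + eps) - 1.0
    ny = 2.0 * img_y / (img_full + eps) - 1.0
    nz = 2.0 * img_z / (img_full + eps) - 1.0
    n = np.stack([nx, ny, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + eps
    return n
```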
Item: Environmental Objects for Authoring Procedural Scenes (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Grosbellet, Francois; Peytavie, Adrien; Guérin, Éric; Galin, Éric; Mérillou, Stéphane; Benes, Bedrich; Chen, Min and Zhang, Hao (Richard)
We propose a novel approach for authoring large scenes with automatic enhancement of objects to create geometric decoration details such as snow cover, icicles, fallen leaves, grass tufts or even trash. We introduce environmental objects that extend an input object geometry with a set of procedural effects that define how the object reacts to the environment, and a set of scalar fields that define the influence of the object over the environment. The user controls the scene by modifying environmental variables, such as temperature or humidity fields. The scene definition is hierarchical: objects can be grouped and their behaviours can be set at each level of the hierarchy. Our per-object definition allows us to optimize and accelerate the effects computation, which also enables us to generate large scenes with many geometric details at a very high level of detail. In our implementation, a complex urban scene of 10 000 m², represented with details of less than 1 cm, can be locally modified and entirely regenerated in a few seconds.
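To make the environmental-object concept concrete, here is a hypothetical sketch (not the paper's implementation) in which a scalar temperature field drives a snow-cover effect; the class design, field encoding and snow model are all assumptions for illustration:

```python
import numpy as np

class EnvironmentalObject:
    """Illustrative environmental object: base geometry plus a list of
    procedural effects, each driven by a named environment field."""
    def __init__(self, vertices, effects):
        self.vertices = vertices          # (n, 3) base geometry
        self.effects = effects            # list of (field_name, apply_fn)

    def generate(self, fields):
        """Re-generate decorated geometry from the current environment
        fields (e.g. temperature, humidity), sampled at each vertex."""
        verts = self.vertices.copy()
        for name, apply_fn in self.effects:
            values = fields[name](verts)  # sample scalar field at vertices
            verts = apply_fn(verts, values)
        return verts

def snow_cover(verts, temperature, max_height=0.05):
    """Displace geometry upward by a snow layer whose thickness grows
    as the local temperature drops below freezing."""
    thickness = max_height * np.clip(-temperature / 10.0, 0.0, 1.0)
    out = verts.copy()
    out[:, 2] += thickness                # grow snow along the up axis
    return out

# Example: a temperature field that gets colder with altitude.
fields = {"temperature": lambda v: 5.0 - 20.0 * v[:, 2]}
obj = EnvironmentalObject(np.random.rand(100, 3), [("temperature", snow_cover)])
decorated = obj.generate(fields)
```

Because each effect is attached to the object rather than computed globally, only objects whose fields changed need to be regenerated, which is the property that makes local editing of large scenes fast.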
Item: Real-Time Rendering Techniques with Hardware Tessellation (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Nießner, M.; Keinert, B.; Fisher, M.; Stamminger, M.; Loop, C.; Schäfer, H.; Chen, Min and Zhang, Hao (Richard)
Graphics hardware has progressively been optimized to render more triangles with increasingly flexible shading. For highly detailed geometry, interactive applications restricted themselves to performing transforms on fixed geometry, since they could not incur the cost required to generate and transfer smooth or displaced geometry to the GPU at render time. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit, complex geometry can now be generated on the fly within the GPU's rendering pipeline. This has enabled the generation and displacement of smooth parametric surfaces in real-time applications. However, many well-established approaches in offline rendering are not directly transferable due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this survey, we provide an overview of recent work and challenges in this topic by summarizing, discussing, and comparing methods for the rendering of smooth and highly detailed surfaces in real time.

Item: Full 3D Plant Reconstruction via Intrusive Acquisition (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Yin, Kangxue; Huang, Hui; Long, Pinxin; Gaissinski, Alexei; Gong, Minglun; Sharf, Andrei; Chen, Min and Zhang, Hao (Richard)
Digitally capturing vegetation using off-the-shelf scanners is a challenging problem. Plants typically exhibit large self-occlusions and thin structures which cannot be properly scanned. Furthermore, plants are essentially dynamic, deforming over time, which yields additional difficulties in the scanning process. In this paper, we present a novel technique for acquiring and modelling plants and foliage. At the core of our method is an intrusive acquisition approach, which disassembles the plant into disjoint parts that can be accurately scanned and reconstructed offline. We use the reconstructed part meshes as 3D proxies for the reconstruction of the complete plant and devise a global-to-local non-rigid registration technique that preserves specific plant characteristics. Our method is tested on plants of various styles, appearances and characteristics. Results show successful reconstructions with high accuracy with respect to the acquired data.

Item: Autocorrelation Descriptor for Efficient Co-Alignment of 3D Shape Collections (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Averkiou, Melinos; Kim, Vladimir G.; Mitra, Niloy J.; Chen, Min and Zhang, Hao (Richard)
Co-aligning a collection of shapes to a consistent pose is a common problem in shape analysis with applications in shape matching, retrieval and visualization. We observe that resolving among some orientations is easier than others; for example, a common mistake for bicycles is to align front-to-back, while even the simplest algorithm would not erroneously pick an orthogonal alignment. The key idea of our work is to analyse rotational autocorrelations of shapes to facilitate shape co-alignment. In particular, we use such an autocorrelation measure of individual shapes to decide which shape pairs might have well-matching orientations; and, if so, which configurations are likely to produce better alignments. This significantly prunes the number of alignments to be examined, and leads to an efficient, scalable algorithm that performs comparably to state-of-the-art techniques on benchmark data sets, but requires significantly fewer computations, resulting in a 2–16× speed improvement in our tests.
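The rotational-autocorrelation idea can be sketched on a 2D occupancy grid: correlate the shape with rotated copies of itself and inspect the resulting profile. A flat, high profile signals near-rotational symmetry, i.e. orientations that are hard to resolve. This is a simplified illustration, not the paper's 3D descriptor; the sampling scheme and names are assumptions:

```python
import numpy as np

def rotational_autocorrelation(grid, angles_deg):
    """Correlate a 2D occupancy grid with rotated copies of itself.
    Returns one normalized correlation score per angle (score 1.0 at
    angle 0 by construction)."""
    h, w = grid.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    norm = np.sum(grid * grid)
    scores = []
    for a in np.deg2rad(angles_deg):
        # Rotate sampling coordinates about the grid centre
        # (nearest-neighbour resampling keeps the sketch short).
        ry = cy + (ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)
        rx = cx + (ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)
        ry = np.clip(np.round(ry).astype(int), 0, h - 1)
        rx = np.clip(np.round(rx).astype(int), 0, w - 1)
        scores.append(np.sum(grid * grid[ry, rx]) / norm)
    return np.array(scores)
```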
Item: Fast ANN for High-Quality Collaborative Filtering (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Tsai, Yun-Ta; Steinberger, Markus; Pająk, Dawid; Pulli, Kari; Chen, Min and Zhang, Hao (Richard)
Collaborative filtering collects similar patches, jointly filters them and scatters the output back to input patches; each pixel gets a contribution from each patch that overlaps with it, allowing signal reconstruction from highly corrupted data. Exploiting self-similarity, however, requires finding matching image patches, which is an expensive operation. We propose a GPU-friendly approximate-nearest-neighbour (ANN) algorithm that produces high-quality results for any type of collaborative filter. We evaluate our ANN search against state-of-the-art ANN algorithms in several application domains. Our method is orders of magnitude faster, yet provides similar or higher quality results than the previous work.
Teaser figure caption: Collaborative filtering is a powerful, yet computationally demanding denoising approach. (a) Relying on self-similarity in the input data, collaborative filtering requires the search for patches which are similar to a reference patch (red). Filtering the patches, either by averaging the pixels or modifying the coefficients after a wavelet or other transformation, removes unwanted noise, and each output pixel is collaboratively filtered using all the denoised image patches that overlap the pixel. Our method accelerates the search for similar patches and facilitates high-quality collaborative filtering even on mobile devices. Application examples include (left: our output; right: noisy input) (b) denoising an image burst, (c) filtering the samples for global illumination and (d) geometry reconstruction.
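For readers unfamiliar with the gather-filter-scatter pattern described above, the following brute-force baseline sketches it on a grayscale image; the paper's contribution is a fast GPU ANN search replacing the exhaustive matching loop used here. All parameters are illustrative:

```python
import numpy as np

def collaborative_filter(img, patch=5, k=8, search=10):
    """Baseline collaborative filtering: for each reference patch,
    gather the k most similar patches in a local search window,
    average them jointly, and scatter the result back with per-pixel
    weights accumulated over overlapping patches."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    weight = np.zeros_like(out)
    r = patch // 2
    step = max(1, patch - 2)              # overlapping reference patches
    for y in range(r, h - r, step):
        for x in range(r, w - r, step):
            ref = img[y - r:y + r + 1, x - r:x + r + 1]
            cands = []
            for dy in range(-search, search + 1, 2):
                for dx in range(-search, search + 1, 2):
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        p = img[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        cands.append((np.sum((p - ref) ** 2), p))
            cands.sort(key=lambda c: c[0])        # gather: best k matches
            denoised = np.mean([p for _, p in cands[:k]], axis=0)
            out[y - r:y + r + 1, x - r:x + r + 1] += denoised   # scatter
            weight[y - r:y + r + 1, x - r:x + r + 1] += 1.0
    return out / np.maximum(weight, 1e-9)
```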
Item: A Hierarchical Approach for Regular Centroidal Voronoi Tessellations (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Wang, L.; Hétroy-Wheeler, F.; Boyer, E.; Chen, Min and Zhang, Hao (Richard)
In this paper, we consider Centroidal Voronoi Tessellations (CVTs) and study their regularity. CVTs are geometric structures that enable regular tessellations of geometric objects and are widely used in shape modelling and analysis. While several efficient iterative schemes, with defined local convergence properties, have been proposed to compute CVTs, little attention has been paid to the evaluation of the resulting cell decompositions. In this paper, we propose a regularity criterion that allows us to evaluate and compare CVTs independently of their sizes and of their cell numbers. This criterion allows us to compare CVTs on a common basis. It builds on earlier theoretical work showing that second moments of cells converge to a lower bound when optimizing CVTs. In addition to proposing a regularity criterion, this paper also considers computational strategies to determine regular CVTs. We introduce a hierarchical framework that propagates regularity over decomposition levels and hence provides CVTs with provably better regularities than existing methods. We illustrate these principles with a wide range of experiments on synthetic and real models.
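The second-moment view of regularity can be sketched with plain Lloyd relaxation on a Monte Carlo sampling of the domain; this is a simplified stand-in for the paper's hierarchical scheme, and the normalization below is an assumption:

```python
import numpy as np

def lloyd_cvt(sites, samples, iters=50):
    """Lloyd relaxation towards a centroidal Voronoi tessellation on a
    point-sampled domain: assign samples to the nearest site, then move
    each site to the centroid of its cell."""
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
        owner = np.argmin(d, axis=1)
        for i in range(len(sites)):
            cell = samples[owner == i]
            if len(cell) > 0:
                sites[i] = cell.mean(axis=0)
    return sites, owner

def mean_second_moment(sites, samples, owner):
    """Average second moment of the cells about their sites; lower
    values indicate more regular (closer to hexagonal) tessellations."""
    m = 0.0
    for i in range(len(sites)):
        cell = samples[owner == i]
        m += np.sum(np.sum((cell - sites[i]) ** 2, axis=1))
    return m / len(samples)

sites = np.random.rand(32, 2)
samples = np.random.rand(20000, 2)   # Monte Carlo proxy for the unit square
sites, owner = lloyd_cvt(sites, samples)
print(mean_second_moment(sites, samples, owner))
```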
Item: Variational Image Fusion with Optimal Local Contrast (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Hafner, David; Weickert, Joachim; Chen, Min and Zhang, Hao (Richard)
In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject into a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre-computing application-specific weights based on the input, and then combining these weights with the images to the final composite later on. In contrast, we design our model assumptions directly on the fusion result. To this end, we formulate the output image as a convex combination of the input and incorporate concepts from perceptually inspired contrast enhancement such as a local and non-linear response. This output-driven approach is the key to the versatility of our general image fusion model. In this regard, we demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state-of-the-art approaches that are tailored to the individual tasks.
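A minimal sketch of the convex-combination formulation: fuse an exposure stack with per-pixel weights that sum to one. The naive well-exposedness weight below stands in for the paper's variational optimization of exposedness, saturation and local contrast:

```python
import numpy as np

def fuse(images, sigma=0.2):
    """Fuse an exposure stack (list of grayscale images in [0, 1]) into
    one image as a per-pixel convex combination. The weights here simply
    favour well-exposed pixels via a Gaussian around mid-grey."""
    stack = np.stack(images, axis=0).astype(np.float64)   # (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= np.sum(weights, axis=0, keepdims=True)     # convex: sum to 1
    return np.sum(weights * stack, axis=0)
```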
Item: Continuity and Interpolation Techniques for Computer Graphics (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Gonzalez, F.; Patow, G.; Chen, Min and Zhang, Hao (Richard)
Continuity and interpolation have been crucial topics for computer graphics since its very beginnings. Every time we want to interpolate values across some area, we need to take a set of samples over that interpolating region. However, interpolating samples faithfully, so that the results closely match the underlying functions, can be a tricky task, as the functions to be sampled may not be smooth and, in the worst case, interpolation may even be impossible when they are not continuous. In those situations, providing the required continuity is not an easy task, and much work has been done to solve this problem. In this paper, we focus on the state of the art in continuity and interpolation in three stages of the real-time rendering pipeline. We study these problems and their current solutions in texture space (2D), object space (3D) and screen space. With this review of the literature in these areas, we hope to bring new light and foster research in these fundamental, yet not completely solved problems in computer graphics.

Item: Lauren (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Chen, Min and Zhang, Hao (Richard)

Item: Anisotropic Strain Limiting for Quadrilateral and Triangular Cloth Meshes (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Ma, Guanghui; Ye, Juntao; Li, Jituo; Zhang, Xiaopeng; Chen, Min and Zhang, Hao (Richard)
Cloth simulation systems often suffer from excessive extension on the polygonal mesh, so an additional strain-limiting process is typically used as a remedy in the simulation pipeline. A cloth model can be discretized as either a quadrilateral mesh or a triangular mesh, and their strains are measured differently. The edge-based strain-limiting method for a quadrilateral mesh creates anisotropic behaviour by nature, as the discretization usually aligns the edges along the warp and weft directions. We improve this anisotropic technique by replacing the traditionally used equality constraints with inequality ones in the mathematical optimization, and achieve faster convergence. For a triangular mesh, the state-of-the-art technique measures and constrains the strains along the two principal (and constantly changing) directions in a triangle, resulting in an isotropic behaviour which prohibits shearing. Based on the framework of inequality-constrained optimization, we propose a warp and weft strain-limiting formulation. This anisotropic model is more appropriate for textile materials that do not exhibit isotropic strain behaviour.
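The inequality-constrained idea can be sketched as an iterative projection that corrects an edge only when its strain limit is violated; this Jacobi-style loop is a simplified stand-in for the paper's mathematical optimization, with illustrative parameters:

```python
import numpy as np

def limit_edge_strain(pos, edges, rest, max_strain=0.1, iters=10):
    """Strain limiting on a mesh's warp/weft edges: an edge is corrected
    only when it exceeds its rest length by more than max_strain (an
    inequality constraint); edges within the limit are left untouched."""
    pos = pos.copy()
    for _ in range(iters):
        corr = np.zeros_like(pos)
        count = np.zeros(len(pos))
        for (i, j), l0 in zip(edges, rest):
            d = pos[j] - pos[i]
            l = np.linalg.norm(d)
            limit = l0 * (1.0 + max_strain)
            if l > limit:                  # inequality: act only if violated
                delta = 0.5 * (l - limit) * d / l
                corr[i] += delta
                corr[j] -= delta
                count[i] += 1
                count[j] += 1
        mask = count > 0
        pos[mask] += corr[mask] / count[mask, None]   # averaged corrections
    return pos
```

A Gauss-Seidel variant, applying each correction immediately, typically converges in fewer sweeps at the cost of order dependence.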
Item: Colour Mapping: A Review of Recent Methods, Extensions and Applications (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Faridul, H. Sheikh; Pouli, T.; Chamaret, C.; Stauder, J.; Reinhard, E.; Kuzovkin, D.; Tremeau, A.; Chen, Min and Zhang, Hao (Richard)
The objective of colour mapping or colour transfer methods is to recolour a given image or video by deriving a mapping between that image and another image serving as a reference. These methods have received considerable attention in recent years, both in academic literature and industrial applications. Methods for recolouring images have often appeared under the labels of colour correction, colour transfer or colour balancing, to name a few, but their goal is always the same: mapping the colours of one image to another. In this paper, we present a comprehensive overview of these methods and offer a classification of current solutions depending not only on their algorithmic formulation but also on their range of applications. We also provide a new dataset and a novel evaluation technique called 'evaluation by colour mapping roundtrip'. We discuss the relative merit of each class of techniques through examples and show how colour mapping solutions can be applied to a diverse range of problems.
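As a concrete member of the family this survey classifies, here is the classic global statistics transfer (per-channel mean/standard-deviation matching, in the spirit of Reinhard et al.); it is an example, not the survey's own method:

```python
import numpy as np

def colour_transfer(src, ref):
    """Global colour mapping: shift and scale each channel of the source
    image so its mean and standard deviation match those of the
    reference. src and ref are (h, w, 3) arrays in [0, 255]."""
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-9
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0, 255)
```

Working in a decorrelated colour space rather than RGB, as Reinhard et al. originally proposed, usually reduces channel cross-talk.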
Item: Issue Information (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Chen, Min and Zhang, Hao (Richard)

Item: Robust Cardiac Function Assessment in 4D PC-MRI Data of the Aorta and Pulmonary Artery (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard; Chen, Min and Zhang, Hao (Richard)
Four-dimensional phase-contrast magnetic resonance imaging (4D PC-MRI) allows the non-invasive acquisition of time-resolved, 3D blood flow information. Stroke volumes (SVs) and regurgitation fractions (RFs) are two of the main measures to assess the cardiac function and severity of valvular pathologies. The flow rates in forward and backward direction through a plane above the aortic or pulmonary valve are required for their quantification. Unfortunately, the calculations are highly sensitive to the plane's angulation, since orthogonally passing flow is considered. This often leads to physiologically implausible results. In this work, a robust quantification method is introduced to overcome this problem. Collaborating radiologists and cardiologists were carefully observed while estimating SVs and RFs in various healthy volunteer and patient 4D PC-MRI data sets with conventional quantification methods, that is, using a single plane above the valve that is freely movable along the centerline. By default, it is aligned perpendicular to the vessel's centerline, but free angulation (rotation) is possible. This facilitated the automation of their approach which, in turn, allows us to derive statistical information about the plane angulation sensitivity. Moreover, the experts expect a continuous decrease of the blood flow volume along the vessel course. Conventional methods are often unable to produce this behaviour. Thus, we present a procedure to fit a monotonous function that ensures such physiologically plausible results. In addition, this technique was adapted for use in branching vessels such as the pulmonary artery. An informal evaluation shows the capability of our method to support diagnosis; a parameter evaluation confirms its robustness. Vortex flow was identified as one of the main causes for quantification uncertainties.
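The basic quantification that the paper makes robust can be sketched as follows: integrate forward and backward through-plane flow over one heartbeat to obtain the stroke volume and regurgitation fraction. This omits the paper's plane-angulation handling and monotonic-fit step; names and conventions are illustrative:

```python
import numpy as np

def stroke_volume_and_rf(vel_through_plane, pixel_area, dt):
    """Quantify cardiac function from through-plane velocities sampled
    on a measuring plane over one heartbeat. vel_through_plane has
    shape (timesteps, pixels); positive values are forward flow.
    Returns the stroke volume (forward minus backward volume) and the
    regurgitation fraction (backward / forward)."""
    flow = vel_through_plane * pixel_area           # per-pixel flow rate
    forward = np.sum(np.clip(flow, 0, None)) * dt   # integrate over time
    backward = -np.sum(np.clip(flow, None, 0)) * dt
    sv = forward - backward
    rf = backward / forward if forward > 0 else 0.0
    return sv, rf
```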
Item: Editorial (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Chen, Min; Zhang, Hao (Richard); Chen, Min and Zhang, Hao (Richard)

Item: Mesh Sequence Morphing (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Chen, Xue; Feng, Jieqing; Bechmann, Dominique; Chen, Min and Zhang, Hao (Richard)
Morphing is an important technique for the generation of special effects in computer animation. However, an analogous technique has not yet been applied to the increasingly prevalent animation representation, i.e. 3D mesh sequences. In this paper, a technique for morphing between two mesh sequences is proposed to simultaneously blend motions and interpolate shapes. Based on all possible combinations of the motions and geometries, a universal framework is proposed to recreate various plausible mesh sequences. To enable a universal framework, we design a skeleton-driven cage-based deformation transfer scheme which can account for motion blending and geometry interpolation. To establish one-to-one correspondence for interpolating between two mesh sequences, a hybrid cross-parameterization scheme, which fully utilizes the skeleton-driven cage control structure and adapts user-specified joint-like markers, is introduced. The experimental results demonstrate that the framework not only accomplishes mesh sequence morphing, but is also suitable for a wide range of applications such as deformation transfer, motion blending or transition and dynamic shape interpolation.

Item: Practical Low-Cost Recovery of Spectral Power Distributions (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Alvarez-Cortes, Sara; Kunkel, Timo; Masia, Belen; Chen, Min and Zhang, Hao (Richard)
Measuring the spectral power distribution of a light source, that is, the emission as a function of wavelength, typically requires the use of spectrophotometers or multi-spectral cameras. Here, we propose a low-cost system that enables the recovery of the visible-light spectral signature of different types of light sources without requiring highly complex or specialized equipment, using just off-the-shelf, widely available components. To do this, a standard Digital Single-Lens Reflex (DSLR) camera and a diffraction filter are used, sacrificing the spatial dimension for spectral resolution. We present the image formation model and the calibration process necessary to recover the spectrum, including spectral calibration and amplitude recovery. We also assess the robustness of our method and perform a detailed analysis exploring the parameters influencing its accuracy. Further, we show applications of the system in image processing and rendering.
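A rough sketch of the recovery pipeline the abstract outlines: calibrate a linear pixel-to-wavelength mapping from two known emission lines, then divide by the camera's spectral response to recover relative amplitudes. Everything here, including the helper names, is a hypothetical simplification:

```python
import numpy as np

def recover_spectrum(row, px_a, lam_a, px_b, lam_b, response):
    """Turn one image row of a diffraction-smeared light source into a
    relative spectral power distribution. Two pixel positions with known
    wavelengths (e.g. from reference emission lines) fix a linear
    pixel-to-wavelength mapping; dividing by the camera's spectral
    response (a callable over wavelength in nm) recovers amplitudes."""
    px = np.arange(len(row))
    # Linear spectral calibration from the two known correspondences.
    slope = (lam_b - lam_a) / (px_b - px_a)
    wavelengths = lam_a + slope * (px - px_a)
    amplitudes = row / np.maximum(response(wavelengths), 1e-6)
    visible = (wavelengths >= 380) & (wavelengths <= 700)
    return wavelengths[visible], amplitudes[visible]
```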
Item: State of the Art in Artistic Editing of Appearance, Lighting and Material (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016)
Schmidt, Thorsten-Walther; Pellacini, Fabio; Nowrouzezahrai, Derek; Jarosz, Wojciech; Dachsbacher, Carsten; Chen, Min and Zhang, Hao (Richard)
Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature film, architecture and medical industries. Images with well-designed shading are an important tool for conveying information about the world, be it the shape and function of a computer-aided design (CAD) model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearances, lighting and materials, as well as entailing the introduction of new interaction paradigms and specialized preview rendering techniques. In this review, we provide a comprehensive survey of artistic appearance, lighting and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering back ends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research.