36-Issue 4
Browsing 36-Issue 4 by Title
Now showing 1 - 17 of 17
Item An Appearance Model for Textile Fibers
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Aliaga, Carlos; Castillo, Carlos; Gutierrez, Diego; Otaduy, Miguel A.; López-Moreno, Jorge; Jarabo, Adrián; Zwicker, Matthias and Sander, Pedro
Accurately modeling how light interacts with cloth is challenging, due to the volumetric nature of cloth appearance and its multiscale structure, where microstructures play a major role in the overall appearance at higher scales. Recently, significant effort has been put into developing better microscopic models of cloth structure, which have allowed rendering fabrics with unprecedented fidelity. However, these highly detailed representations still make severe simplifications regarding the scattering by the individual fibers forming the cloth, ignoring the impact of fiber shape and failing to establish connections between the fibers' appearance and their optical and fabrication parameters. In this work we focus on the scattering of individual cloth fibers; we introduce a physically-based scattering model for fibers based on their low-level optical and geometric properties, relying on the extensive textile literature for accurate data. We demonstrate that scattering from cloth fibers exhibits much more complexity than current fiber models capture, showing important differences between cloth types, even under averaged conditions due to distant views. Our model can be plugged into any framework for cloth rendering, matches scattering measurements from real yarns, and is based on actual parameters used in the textile industry, allowing a predictive, bottom-up definition of cloth appearance.

Item Area-Preserving Parameterizations for Spherical Ellipses
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Guillén, Ibón; Ureña, Carlos; King, Alan; Fajardo, Marcos; Georgiev, Iliyan; López-Moreno, Jorge; Jarabo, Adrián; Zwicker, Matthias and Sander, Pedro
We present new methods for uniformly sampling the solid angle subtended by a disk.
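For intuition, the classic area-preserving map for the much simpler special case of a spherical cap (a disk seen along its axis) can be sketched as follows. The paper's contribution is the harder general spherical-ellipse case, so this sketch is illustrative only:

```python
import math

def sample_spherical_cap(u, v, cos_theta_max):
    """Area-preserving map from the unit square to a spherical cap.

    Uniformly samples a direction inside the cone of directions whose
    angle to the +z axis is at most theta_max. This is the classic
    on-axis special case, not the spherical-ellipse mapping of the paper.
    """
    cos_theta = 1.0 - u * (1.0 - cos_theta_max)   # uniform in solid angle
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * v
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

def cap_solid_angle(cos_theta_max):
    """Solid angle of the cap, the normalization constant of the map."""
    return 2.0 * math.pi * (1.0 - cos_theta_max)
```

Stratifying (u, v) over the unit square then yields stratified directions over the cap, which is exactly the property the paper's mappings provide for general spherical ellipses.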
To achieve this, we devise two novel area-preserving mappings from the unit square [0,1]² to a spherical ellipse (i.e. the projection of the disk onto the unit sphere). These mappings allow for low-variance stratified sampling of direct illumination from disk-shaped light sources. We discuss how to efficiently incorporate our methods into a production renderer and demonstrate the quality of our maps, showing significantly lower variance than previous work.

Item Attribute-preserving Gamut Mapping of Measured BRDFs
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Sun, Tiancheng; Serrano, Ana; Gutierrez, Diego; Masia, Belen; Zwicker, Matthias and Sander, Pedro
Reproducing the appearance of real-world materials using current printing technology is problematic. The reduced number of available inks defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out-of-gamut material appearance into the printer's gamut while minimizing such distortions as much as possible. We present a novel two-step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness or roughness). In the first step, we work in the low-dimensional intuitive appearance space recently proposed by Serrano et al. [SGM 16], and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From such an intermediate representation, we then perform an image-based optimization including color information to bring the BRDF into gamut. We show, both objectively and through a user study, how our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes need to be preserved.
Moreover, we show how this approach can also be used for attribute-preserving material editing.

Item Bayesian Collaborative Denoising for Monte Carlo Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Boughida, Malik; Boubekeur, Tamy; Zwicker, Matthias and Sander, Pedro
The stochastic nature of Monte Carlo rendering algorithms inherently produces noisy images. Essentially, three approaches have been developed to solve this issue: improving the ray-tracing strategies to reduce pixel variance, providing adaptive sampling by increasing the number of rays in regions that need it, and filtering the noisy image as a post-process. Although the algorithms in the latter category introduce bias, they remain highly attractive as they quickly improve the visual quality of the images, are compatible with all sorts of rendering effects, have a low computational cost and, for some of them, avoid deep modifications of the rendering engine. In this paper, we build upon recent advances in both non-local and collaborative filtering methods to propose a new, efficient denoising operator for Monte Carlo rendering. Starting from the local statistics that emanate from the per-pixel sample distributions, we enrich the image with local covariance measures and introduce a non-local Bayesian filter specifically designed to address the noise stemming from Monte Carlo rendering. The resulting algorithm only requires the rendering engine to provide, for each pixel, a histogram and a covariance matrix of its color samples. Compared to state-of-the-art sample-based methods, we obtain improved denoising results, especially in dark areas, with a large increase in speed and more robustness with respect to the main parameter of the algorithm.
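The core idea of letting per-pixel sample statistics drive the filter can be illustrated with a toy linear-Bayesian shrinkage of a single pixel toward its neighborhood mean. This is far from the paper's collaborative, covariance-based filter; it only shows how a high-variance pixel estimate gets pulled toward a prior while a reliable one is left alone:

```python
def bayesian_shrink(mean, var, neighborhood):
    """Toy linear-Bayesian pixel denoiser (illustration only).

    Treats the neighborhood mean/variance as a prior and the pixel's
    own Monte Carlo sample mean/variance as the likelihood; the
    posterior mean blends the two in proportion to their reliability.
    """
    prior_mean = sum(neighborhood) / len(neighborhood)
    prior_var = sum((x - prior_mean) ** 2 for x in neighborhood) / len(neighborhood)
    if prior_var + var == 0.0:
        return mean
    w = prior_var / (prior_var + var)   # trust the pixel more when its variance is low
    return w * mean + (1.0 - w) * prior_mean
```

A converged pixel (variance 0) is returned unchanged, while a very noisy one collapses to the neighborhood mean — the same trade-off the full method makes per histogram bin.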
We provide a detailed mathematical exposition of our Bayesian approach, discuss extensions to multiscale execution, adaptive sampling and animated scenes, and experimentally validate it on a collection of scenes.

Item Bi-Layer Textures: a Model for Synthesis and Deformation of Composite Textures
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Guingo, Geoffrey; Sauvage, Basile; Dischler, Jean-Michel; Cani, Marie-Paule; Zwicker, Matthias and Sander, Pedro
We propose a bi-layer representation for textures which is suitable for on-the-fly synthesis of unbounded textures from an input exemplar. The goal is to improve the variety of outputs while preserving plausible small-scale details. The insight is that many natural textures can be decomposed into a series of fine-scale Gaussian patterns, which have to be faithfully reproduced, and some non-homogeneous, larger-scale structure, which can be deformed to add variety. Our key contribution is a novel bi-layer representation for such textures. It includes a model for spatially-varying Gaussian noise, together with a mechanism enabling synchronization with a structure layer. We propose an automatic method to instantiate our bi-layer model from an input exemplar. At the synthesis stage, the two layers are generated independently, synchronized and added, preserving the consistency of details even when the structure layer has been deformed to increase variety. We show, on a variety of complex real textures, that our method reduces repetition artifacts while preserving a coherent appearance.

Item Decomposing Single Images for Layered Photo Retouching
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Innamorati, Carlo; Ritschel, Tobias; Weyrich, Tim; Mitra, Niloy J.; Zwicker, Matthias and Sander, Pedro
Photographers routinely compose multiple manipulated photos of the same scene into a single image, producing a fidelity difficult to achieve using any individual photo.
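Layered workflows of this kind rest on a recomposition step that turns the layers back into an image. As a sketch, here is one common additive compositing model (occlusion times albedo times diffuse shading, plus specular); the paper's exact decomposition may differ in detail:

```python
def recompose(albedo, diffuse, occlusion, specular):
    """Recombine intrinsic-style layers into a final image.

    Assumes the common per-pixel model
        image = occlusion * albedo * diffuse + specular
    This is an illustrative stand-in for the compositing step of a
    layered retouching workflow, not the paper's exact model.
    """
    return [o * a * d + s
            for a, d, o, s in zip(albedo, diffuse, occlusion, specular)]
```

Editing a single layer before recomposition (e.g. scaling the specular layer to boost highlights) leaves the other appearance components untouched, which is the point of such decompositions.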
Alternatively, 3D artists set up rendering systems to produce layered images that isolate individual aspects of the light transport, which are composed into the final result in post-production. Regrettably, these approaches either take considerable time and effort to capture, or remain limited to synthetic scenes. In this paper, we suggest a method to decompose a single image into multiple layers that approximate effects such as shadow, diffuse illumination, albedo, and specular shading. To this end, we extend the idea of intrinsic images along two axes: first, by complementing shading and reflectance with specularity and occlusion, and second, by introducing directional dependence. We do so by training a convolutional neural network (CNN) with synthetic data. Such decompositions can then be manipulated in any off-the-shelf image manipulation software and composited back. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use them for photo manipulations that are otherwise impossible to perform based on single images. We provide comparisons with state-of-the-art methods and also evaluate the quality of our decompositions via a user study measuring the effectiveness of the resulting photo-retouching setup. Supplementary material and code are available for research use at geometry.cs.ucl.ac.uk/projects/2017/layered-retouching.

Item Deep Shading: Convolutional Neural Networks for Screen Space Shading
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Nalbach, Oliver; Arabadzhiyska, Elena; Mehta, Dushyant; Seidel, Hans-Peter; Ritschel, Tobias; Zwicker, Matthias and Sander, Pedro
In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance.
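The building block of the attribute-to-appearance direction is the convolution over screen-space attribute maps. A single zero-padded 3x3 convolution with a ReLU, written out in plain Python, gives the flavor; Deep Shading stacks many such layers and learns the kernels from data, so this is a minimal illustration, not the network itself:

```python
def conv3x3(attrs, kernel, bias=0.0):
    """One zero-padded 3x3 convolution + ReLU over a 2D attribute map.

    `attrs` is a list of rows of per-pixel attribute values; `kernel`
    is a 3x3 list of weights. A CNN for screen-space shading stacks
    many such learned layers; this single hand-written one is only a
    sketch of the operation involved.
    """
    h, w = len(attrs), len(attrs[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = bias
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:   # zero padding
                        acc += kernel[dy + 1][dx + 1] * attrs[yy][xx]
            out[y][x] = max(0.0, acc)   # ReLU nonlinearity
    return out
```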
In computer graphics, screen-space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen-space effects at competitive quality and speed while not being programmed by human experts but learned from example images.

Item Eurographics Symposium on Rendering 2017: Frontmatter
(Eurographics Association, 2017) Zwicker, Matthias; Sander, Pedro

Item Fast Hardware Construction and Refitting of Quantized Bounding Volume Hierarchies
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Viitanen, Timo; Koskela, Matias; Jääskeläinen, Pekka; Immonen, Kalle; Takala, Jarmo; Zwicker, Matthias and Sander, Pedro
There is recent interest in GPU architectures designed to accelerate ray tracing, especially on mobile systems with limited memory bandwidth. A promising recent approach is to store and traverse Bounding Volume Hierarchies (BVHs), used to accelerate ray tracing, in low arithmetic precision. However, so far there has been no research on refitting or construction of such compressed BVHs, which is necessary for any scene with dynamic content. We find that in a hardware-accelerated tree update, significant memory-traffic and runtime savings are available from streaming, bottom-up compression. Novel algorithmic techniques of modulo encoding and treelet-based compression are proposed to reduce the backtracking inherent in bottom-up compression. Together, these techniques reduce backtracking to a small fraction. Compared to a separate top-down compression pass, streaming bottom-up compression with the proposed optimizations saves on average 42% of memory accesses for LBVH construction and 56% for refitting of compressed BVHs, over 16 test scenes.
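The basic safety requirement of any quantized BVH is that a child box, once encoded at low precision relative to its parent, must still contain the original geometry. A generic sketch of such conservative quantization (round mins down, maxs up) is below; the paper's modulo encoding is a different, more compact scheme, so treat this as illustration only:

```python
import math

def quantize_box(child_min, child_max, parent_min, parent_max, bits=8):
    """Conservatively quantize a child AABB relative to its parent.

    Each bound becomes an integer grid coordinate inside the parent
    box; mins are rounded down and maxs up so the decoded box always
    contains the original. Generic low-precision-BVH scheme, not the
    paper's exact modulo encoding.
    """
    n = (1 << bits) - 1
    qmin, qmax = [], []
    for lo, hi, plo, phi in zip(child_min, child_max, parent_min, parent_max):
        extent = (phi - plo) or 1.0
        qmin.append(int(math.floor((lo - plo) / extent * n)))
        qmax.append(int(math.ceil((hi - plo) / extent * n)))
    return qmin, qmax

def dequantize_box(qmin, qmax, parent_min, parent_max, bits=8):
    """Decode the integer bounds back to (slightly enlarged) floats."""
    n = (1 << bits) - 1
    lo = [plo + q / n * (phi - plo) for q, plo, phi in zip(qmin, parent_min, parent_max)]
    hi = [plo + q / n * (phi - plo) for q, plo, phi in zip(qmax, parent_min, parent_max)]
    return lo, hi
```

Refitting such a hierarchy bottom-up is what forces the backtracking the paper's techniques reduce: a child's quantization depends on its parent's bounds, which are only final after all siblings are processed.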
In architectural simulation, the proposed streaming compression reduces LBVH runtime by 20% compared to a single-precision build, and by 41% compared to a single-precision build followed by top-down compression. Since memory traffic dominates the energy cost of refitting and LBVH construction, energy consumption is expected to fall by a similar fraction.

Item Fiber-Level On-the-Fly Procedural Textiles
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Luan, Fujun; Zhao, Shuang; Bala, Kavita; Zwicker, Matthias and Sander, Pedro
Procedural textile models are compact, easy to edit, and can achieve state-of-the-art realism with fiber-level details. However, these complex models generally need to be fully instantiated (i.e., realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization-minimizing technique that enables physically based rendering of procedural textiles without the need for full model realizations. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber-level procedural yarn models in their exact form, with no approximation imposed. In practice, our method can render very large models that are practically unrenderable using existing methods, while using considerably less memory (60-200× less) and achieving good performance.

Item Line Integration for Rendering Heterogeneous Emissive Volumes
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Simon, Florian; Hanika, Johannes; Zirr, Tobias; Dachsbacher, Carsten; Zwicker, Matthias and Sander, Pedro
Emissive media are often challenging to render: in thin regions where only a few scattering events occur the emission is poorly sampled, while sampling events for emission can be disadvantageous due to absorption in dense regions.
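Collecting emission along an entire ray segment, rather than only at sampled vertices, amounts to evaluating the line integral L = ∫ T(t) ε(t) dt with transmittance T along the segment. A deterministic, absorption-only ray-marched version of that integral (a sketch, not the paper's Monte Carlo estimators) looks like this:

```python
import math

def emitted_radiance(emission, sigma_a, length, steps=1000):
    """Ray-march the emission line integral L = ∫ T(t) * ε(t) dt.

    `emission` and `sigma_a` are functions of distance t along the ray;
    transmittance T accounts for absorption only. Accounting for whole
    segments like this is the idea the paper builds its estimators on;
    the fixed-step quadrature here is just an illustrative sketch.
    """
    dt = length / steps
    L, optical_depth = 0.0, 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        optical_depth += sigma_a(t) * dt          # accumulate absorption
        L += math.exp(-optical_depth) * emission(t) * dt
    return L
```

For a homogeneous medium this converges to the analytic ε(1 − e^(−σd))/σ, which makes the sketch easy to sanity-check.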
We extend the standard path-space measurement contribution to also collect emission along path segments, not only at vertices. We apply this extension to two estimators: extending paths via scattering and distance sampling, and next event estimation. To do so, we unify the two approaches and derive the corresponding Monte Carlo estimators, interpreting next event estimation as a solid-angle sampling technique. We avoid connecting paths to vertices hidden behind dense absorbing layers of smoke by also including transmittance sampling in next event estimation. We demonstrate the advantages of our line-integration approach, which generates estimators with lower variance since entire segments are accounted for. Also, our novel forward next event estimation technique yields faster run times than previous next event estimation, as it penetrates less deeply into dense volumes.

Item Minimal Warping: Planning Incremental Novel-view Synthesis
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Leimkühler, Thomas; Seidel, Hans-Peter; Ritschel, Tobias; Zwicker, Matthias and Sander, Pedro
Observing that many visual effects (depth-of-field, motion blur, soft shadows, spectral effects) and several sampling modalities (time, stereo or light fields) can be expressed as a sum of many pinhole camera images, we suggest a novel, efficient image-synthesis framework that exploits coherency among those images. We introduce the notion of ''distribution flow'', which represents the 2D image deformation in response to changes in the high-dimensional time, lens, area-light, spectral, etc. coordinates. Our approach plans the optimal traversal of the distribution space of all required pinhole images such that, starting from one representative root image which is incrementally changed (warped) in a minimal fashion, pixels move at most by one pixel, if at all.
The incremental warping allows extremely simple warping code, typically requiring half a millisecond per pinhole image on an Nvidia GeForce GTX 980 Ti GPU. We show how the bounded sampling introduces very little error in comparison to re-rendering or a common warping-based solution. Our approach allows efficient previews of arbitrary combinations of distribution effects and imaging modalities with little noise and high visual fidelity.

Item Multiple Axis-Aligned Filters for Rendering of Combined Distribution Effects
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Wu, Lifan; Yan, Ling-Qi; Kuznetsov, Alexandr; Ramamoorthi, Ravi; Zwicker, Matthias and Sander, Pedro
Distribution effects such as diffuse global illumination, soft shadows and depth of field are most accurately rendered using Monte Carlo ray or path tracing. However, physically accurate algorithms can take hours to converge to a noise-free image. A recent body of work has begun to bridge this gap, showing that both individual and multiple effects can be achieved accurately and efficiently. These methods use sparse sampling, GPU ray tracers, and adaptive filtering for reconstruction. They are based on a Fourier analysis which models distribution effects as a wedge in the frequency domain. The wedge can be approximated as a single large axis-aligned filter, which is fast but retains a large area outside the wedge and therefore requires a higher sampling rate; or as a tighter sheared filter, which is slow to compute. The state-of-the-art fast sheared filtering method combines a low sampling rate and efficient filtering, but has been demonstrated for individual distribution effects only, and is limited by high-dimensional data storage and processing. We present a novel filter for efficient rendering of combined effects, involving soft shadows and depth of field, with global (diffuse indirect) illumination.
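Why several axis-aligned filters beat one can be seen with a toy 2D stand-in for the frequency-domain wedge {0 ≤ x ≤ B, 0 ≤ y ≤ s·x}: a single axis-aligned box must cover the whole s·B² bounding rectangle, while splitting the x-range into strips lets the union of boxes approach the wedge's true area s·B²/2. A sketch of that area bookkeeping (illustrative geometry only, not the paper's filter derivation):

```python
def wedge_cover_area(B, s, n_filters):
    """Total area of n axis-aligned boxes covering the wedge
    {0 <= x <= B, 0 <= y <= s * x} by splitting the x-range into strips.

    One box covers the full [0, B] x [0, s * B] rectangle; more boxes
    hug the wedge more tightly, approaching its true area s * B * B / 2.
    """
    dx = B / n_filters
    # strip i spans x in [i*dx, (i+1)*dx] and must reach y = s*(i+1)*dx
    return sum(dx * s * (i + 1) * dx for i in range(n_filters))
```

Less covered area outside the wedge directly translates into the lower required sampling rate the abstract describes.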
We approximate the wedge spectrum with multiple axis-aligned filters, marrying the speed of axis-aligned filtering with an even more accurate (compact and tighter) representation than sheared filtering. We demonstrate rendering of single effects at sampling rates and frame rates comparable to fast sheared filtering. Our main practical contribution is in rendering multiple distribution effects, which have not even been demonstrated accurately with sheared filtering. For this case, we present an average speedup of 6× compared with previous axis-aligned filtering methods.

Item Practical Path Guiding for Efficient Light-transport Simulation
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Müller, Thomas; Gross, Markus; Novák, Jan; Zwicker, Matthias and Sander, Pedro
We present a robust, unbiased technique for intelligent light-path construction in path-tracing algorithms. Inspired by existing path-guiding algorithms, our method learns an approximate representation of the scene's spatio-directional radiance field in an unbiased and iterative manner. To that end, we propose an adaptive spatio-directional hybrid data structure, referred to as an SD-tree, for storing and sampling incident radiance. The SD-tree consists of an upper part, a binary tree that partitions the 3D spatial domain of the light field, and a lower part, a quadtree that partitions the 2D directional domain. We further present a principled way to automatically budget training and rendering computations to minimize the variance of the final image. Our method does not require tuning hyperparameters, although we allow limiting the memory footprint of the SD-tree. The aforementioned properties, its ease of implementation, and its stable performance make our method compatible with production environments.
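Sampling the directional quadtree of such a structure reduces to descending the tree proportionally to the energy stored in each child, then sampling uniformly inside the reached leaf. A minimal sketch over [0,1]² follows; it assumes a toy nested-list tree and plain energies, whereas the paper's SD-tree stores learned radiance estimates and adapts its subdivision:

```python
import random

def quad_energy(node):
    """Total energy stored below a node (leaf = a number)."""
    if not isinstance(node, list):
        return node
    return sum(quad_energy(c) for c in node)

def sample_quadtree(node, rng, x=0.0, y=0.0, size=1.0, pdf=1.0):
    """Draw a point in [0,1]^2 with density proportional to the stored
    energies; returns (x, y, pdf).

    An internal node is a list of four children ordered
    [lower-left, lower-right, upper-left, upper-right]. This sketches
    the hierarchical sample warping behind the SD-tree's lower part.
    """
    if not isinstance(node, list):                # leaf: uniform inside cell
        return x + rng.random() * size, y + rng.random() * size, pdf
    energies = [quad_energy(c) for c in node]
    total = sum(energies)
    u, acc = rng.random() * total, 0.0
    for i, e in enumerate(energies):
        acc += e
        if u <= acc:
            break
    half = size / 2.0
    return sample_quadtree(node[i], rng,
                           x + (i % 2) * half, y + (i // 2) * half,
                           half, pdf * (e / total) * 4.0)
```

The returned pdf (selection probability divided by leaf area) is what a path tracer would use in its multiple-importance-sampling weights.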
We demonstrate the merits of our method on scenes with difficult visibility, detailed geometry, and complex specular-glossy light transport, achieving better performance than previous state-of-the-art algorithms.

Item Real-Time Linear BRDF MIP-Mapping
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Xu, Chao; Wang, Rui; Zhao, Shuang; Bao, Hujun; Zwicker, Matthias and Sander, Pedro
We present a new technique to jointly MIP-map BRDF and normal maps. Starting by generating an instant BRDF map, our technique builds its MIP-mapped versions based on a highly efficient algorithm that interpolates von Mises-Fisher (vMF) distributions. In our BRDF MIP-maps, each pixel stores a vMF mixture approximating the average of all BRDF lobes from the finest level. Our method is capable of jointly MIP-mapping BRDF and normal maps, even with high-frequency variations, in real time while preserving high-quality reflectance details. Further, it is very fast, easy to implement, and requires no precomputation.

Item Stochastic Light Culling for VPLs on GGX Microsurfaces
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Tokuyoshi, Yusuke; Harada, Takahiro; Zwicker, Matthias and Sander, Pedro
This paper introduces a real-time rendering method for single-bounce glossy caustics created by GGX microsurfaces. Our method is based on stochastic light culling of virtual point lights (VPLs), an unbiased culling method that randomly determines the range of influence of each VPL's light. While the original stochastic light culling method uses a bounding sphere defined by that light range for coarse culling (e.g., tiled culling), we further extend the method by calculating a tighter bounding ellipsoid for glossy VPLs. Such bounding ellipsoids can be calculated analytically under the classic Phong reflection model, which, however, cannot be applied to the physically plausible materials used in modern computer graphics productions.
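The unbiased random light range at the heart of such culling can be sketched as Russian roulette on distance: draw ξ in (0,1] once per VPL, cull beyond d_max = r0/√ξ so that P(survive at distance d) = min(1, (r0/d)²), and reweight surviving contributions by the inverse of that probability. This is a generic sketch of the idea, not necessarily the paper's exact formulation:

```python
import math

def stochastic_light_range(r0, xi):
    """Random range of influence with P(d_max >= d) = min(1, (r0/d)^2)."""
    return r0 / math.sqrt(xi)

def culled_contribution(intensity, d, r0, xi):
    """Contribution of a VPL at distance d after stochastic culling.

    Surviving lights are reweighted by the inverse survival
    probability, so the estimator stays unbiased in expectation.
    Generic Russian-roulette sketch, not the paper's derivation.
    """
    p = min(1.0, (r0 / d) ** 2)           # survival probability at d
    if d > stochastic_light_range(r0, xi):
        return 0.0                         # culled for this sample
    return (intensity / (d * d)) / p       # reweight to stay unbiased
```

Averaging over ξ recovers the exact inverse-square falloff, which is why the range, though random and finite, introduces no bias.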
To use stochastic light culling with such modern materials, this paper derives a simple analytical solution that generates a tighter bounding ellipsoid for VPLs on GGX microsurfaces. This paper also presents an efficient implementation for culling bounding ellipsoids in the context of tiled culling. When stochastic light culling is combined with interleaved sampling for a scene with tens of thousands of VPLs, this tiled culling is faster than conservative rasterization-based clustered shading, a state-of-the-art culling technique that supports bounding ellipsoids. Using these techniques, VPLs are culled efficiently for completely dynamic single-bounce glossy caustics reflected from GGX microsurfaces.

Item Variance and Convergence Analysis of Monte Carlo Line and Segment Sampling
(The Eurographics Association and John Wiley & Sons Ltd., 2017) Singh, Gurprit; Miller, Bailey; Jarosz, Wojciech; Zwicker, Matthias and Sander, Pedro
Recently, researchers have started employing Monte Carlo-like line-sample estimators in rendering, demonstrating dramatic reductions in variance (visible noise) for effects such as soft shadows, defocus blur, and participating media. Unfortunately, there is currently no formal theoretical framework to predict and analyze Monte Carlo variance using line and segment samples, which have inherently anisotropic Fourier power spectra. In this work, we propose a theoretical formulation for line and finite-length segment samples in the frequency domain that allows analyzing their anisotropic power spectra using previous isotropic variance and convergence tools. Our analysis shows that judiciously oriented line samples not only reduce the dimensionality but also pre-filter C0 discontinuities, resulting in further improvement in variance and convergence rates. Our theoretical insights also explain how finite-length segment samples impact variance and convergence rates only by pre-filtering discontinuities.
We further extend our analysis to consider (uncorrelated) multi-directional line (segment) sampling, showing that such schemes can increase variance compared to unidirectional sampling. We validate our theoretical results with a set of experiments including direct lighting, ambient occlusion, and volumetric caustics, using point, line, and segment samples.
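The headline variance reduction is easy to verify on a toy integrand: estimating the area of a disk inside the unit square. A point sample evaluates the binary indicator, while a line sample evaluates the chord length analytically, pre-filtering the disk's C0 boundary along one axis. The per-sample variances of both estimators, computed by dense deterministic integration (an illustration of the effect, not the paper's Fourier analysis):

```python
import math

def estimator_variances(r=0.3, n=100000):
    """Per-sample variance of two estimators of a disk's area p = pi*r^2
    (disk of radius r centered in the unit square).

    Point estimator: the 0/1 disk indicator (Bernoulli, variance p(1-p)).
    Line estimator: the chord length of the vertical line at x, i.e. the
    1D integral along the line evaluated analytically, which pre-filters
    the C0 boundary in y. Both variances are computed deterministically.
    """
    p = math.pi * r * r
    point_var = p * (1.0 - p)
    second_moment = 0.0                    # E[chord^2], midpoint rule in x
    for i in range(n):
        t = (i + 0.5) / n - 0.5
        chord2 = 4.0 * (r * r - t * t) if abs(t) < r else 0.0
        second_moment += chord2 / n
    line_var = second_moment - p * p
    return point_var, line_var
```

For r = 0.3 the line estimator's variance comes out roughly three times lower than the point estimator's, matching the qualitative claim above.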