36-Issue 4
Browsing 36-Issue 4 by Subject "Computing methodologies"
Now showing 1 - 11 of 11
Item: An Appearance Model for Textile Fibers (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Aliaga, Carlos; Castillo, Carlos; Gutierrez, Diego; Otaduy, Miguel A.; López-Moreno, Jorge; Jarabo, Adrián
Editors: Zwicker, Matthias; Sander, Pedro

Accurately modeling how light interacts with cloth is challenging due to the volumetric nature of cloth appearance and its multiscale structure, where microstructures play a major role in the overall appearance at higher scales. Recently, significant effort has been put into developing better microscopic models of cloth structure, which have allowed rendering fabrics with unprecedented fidelity. However, these highly detailed representations still make severe simplifications in the scattering by the individual fibers forming the cloth, ignoring the impact of fiber shape and failing to establish connections between the fibers' appearance and their optical and fabrication parameters. In this work we focus on the scattering of individual cloth fibers: we introduce a physically based scattering model for fibers built on their low-level optical and geometric properties, relying on the extensive textile literature for accurate data. We demonstrate that scattering from cloth fibers exhibits much more complexity than current fiber models capture, showing important differences between cloth types even in averaged conditions due to distant views. Our model can be plugged into any framework for cloth rendering, matches scattering measurements from real yarns, and is based on actual parameters used in the textile industry, allowing a predictive, bottom-up definition of cloth appearance.

Item: Area-Preserving Parameterizations for Spherical Ellipses (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Guillén, Ibón; Ureña, Carlos; King, Alan; Fajardo, Marcos; Georgiev, Iliyan; López-Moreno, Jorge; Jarabo, Adrián
Editors: Zwicker, Matthias; Sander, Pedro

We present new methods for uniformly sampling the solid angle subtended by a disk. To achieve this, we devise two novel area-preserving mappings from the unit square [0,1]² to a spherical ellipse (i.e., the projection of the disk onto the unit sphere). These mappings allow for low-variance stratified sampling of direct illumination from disk-shaped light sources. We discuss how to efficiently incorporate our methods into a production renderer and demonstrate the quality of our maps, showing significantly lower variance than previous work.
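The spherical-ellipse mappings themselves are not reproduced here, but the idea of an area-preserving map from the unit square can be illustrated with the classical Shirley-Chiu concentric square-to-disk mapping. The Python sketch below is only context, not the authors' parameterization: it shows the property such maps are built for, namely that stratified samples in [0,1]² remain evenly spread after mapping.

```python
import math

def concentric_square_to_disk(u, v):
    """Shirley-Chiu concentric mapping: an area-preserving map from the
    unit square [0,1]^2 to the unit disk. Equal-area patches in the square
    map to equal-area patches on the disk, so stratified (u, v) samples
    stay stratified on the disk."""
    # Remap to [-1, 1]^2.
    a = 2.0 * u - 1.0
    b = 2.0 * v - 1.0
    if a == 0.0 and b == 0.0:
        return 0.0, 0.0
    if abs(a) > abs(b):
        r = a
        phi = (math.pi / 4.0) * (b / a)
    else:
        r = b
        phi = (math.pi / 2.0) - (math.pi / 4.0) * (a / b)
    return r * math.cos(phi), r * math.sin(phi)

# A stratified 4x4 grid on the square maps to well-distributed disk points.
samples = [concentric_square_to_disk((i + 0.5) / 4, (j + 0.5) / 4)
           for i in range(4) for j in range(4)]
print(samples)
```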
Item: Attribute-preserving Gamut Mapping of Measured BRDFs (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Sun, Tiancheng; Serrano, Ana; Gutierrez, Diego; Masia, Belen
Editors: Zwicker, Matthias; Sander, Pedro

Reproducing the appearance of real-world materials using current printing technology is problematic. The reduced number of available inks defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out-of-gamut material appearance into the printer's gamut while minimizing such distortions as much as possible. We present a novel two-step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness or roughness). In the first step, we work in the low-dimensional intuitive appearance space recently proposed by Serrano et al. [SGM16], and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From this intermediate representation, we then perform an image-based optimization that includes color information to bring the BRDF into gamut. We show, both objectively and through a user study, how our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes need to be preserved. Moreover, we show how this approach can also be used for attribute-preserving material editing.

Item: Bayesian Collaborative Denoising for Monte Carlo Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Boughida, Malik; Boubekeur, Tamy
Editors: Zwicker, Matthias; Sander, Pedro

The stochastic nature of Monte Carlo rendering algorithms inherently produces noisy images. Essentially, three approaches have been developed to address this issue: improving the ray-tracing strategies to reduce pixel variance, adaptive sampling that increases the number of rays in regions that need it, and filtering the noisy image as a post-process. Although the algorithms in the latter category introduce bias, they remain highly attractive: they quickly improve the visual quality of the images, are compatible with all sorts of rendering effects, have a low computational cost and, for some of them, avoid deep modifications of the rendering engine. In this paper, we build upon recent advances in both non-local and collaborative filtering methods to propose a new, efficient denoising operator for Monte Carlo rendering. Starting from the local statistics that emanate from the per-pixel sample distributions, we enrich the image with local covariance measures and introduce a non-local Bayesian filter specifically designed to address the noise stemming from Monte Carlo rendering. The resulting algorithm only requires the rendering engine to provide, for each pixel, a histogram and a covariance matrix of its color samples. Compared to state-of-the-art sample-based methods, we obtain improved denoising results, especially in dark areas, with a large increase in speed and more robustness with respect to the main parameter of the algorithm. We provide a detailed mathematical exposition of our Bayesian approach, discuss extensions to multiscale execution, adaptive sampling and animated scenes, and experimentally validate it on a collection of scenes.
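For context, the sketch below shows one way a renderer could accumulate the per-pixel inputs this denoiser asks for, a histogram and a covariance matrix of the color samples, in a streaming fashion. The class name, bin count and luminance-based binning are illustrative assumptions, not the paper's actual data layout.

```python
import numpy as np

class PixelStats:
    """Streaming per-pixel statistics over RGB radiance samples: a fixed-bin
    luminance histogram plus the 3x3 sample covariance, accumulated one
    sample at a time (Welford-style update)."""
    def __init__(self, num_bins=20, max_lum=4.0):
        self.n = 0
        self.mean = np.zeros(3)
        self.m2 = np.zeros((3, 3))   # running sum of outer products of deviations
        self.hist = np.zeros(num_bins, dtype=np.int64)
        self.num_bins = num_bins
        self.max_lum = max_lum

    def add_sample(self, rgb):
        rgb = np.asarray(rgb, dtype=np.float64)
        self.n += 1
        delta = rgb - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, rgb - self.mean)
        lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
        b = max(0, min(int(lum / self.max_lum * self.num_bins), self.num_bins - 1))
        self.hist[b] += 1

    def covariance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros((3, 3))

stats = PixelStats()
for rgb in [(0.1, 0.2, 0.1), (0.3, 0.1, 0.0), (0.2, 0.2, 0.2)]:
    stats.add_sample(rgb)
print(stats.mean, stats.covariance(), stats.hist)
```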
Item: Bi-Layer Textures: a Model for Synthesis and Deformation of Composite Textures (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Guingo, Geoffrey; Sauvage, Basile; Dischler, Jean-Michel; Cani, Marie-Paule
Editors: Zwicker, Matthias; Sander, Pedro

We propose a bi-layer representation for textures which is suitable for on-the-fly synthesis of unbounded textures from an input exemplar. The goal is to improve the variety of the outputs while preserving plausible small-scale details. The insight is that many natural textures can be decomposed into a series of fine-scale Gaussian patterns, which have to be faithfully reproduced, and some non-homogeneous, larger-scale structure, which can be deformed to add variety. Our key contribution is a novel bi-layer representation for such textures. It includes a model for spatially varying Gaussian noise, together with a mechanism enabling synchronization with a structure layer. We propose an automatic method to instantiate our bi-layer model from an input exemplar. At the synthesis stage, the two layers are generated independently, synchronized and added, preserving the consistency of details even when the structure layer has been deformed to increase variety. We show, on a variety of complex real textures, that our method reduces repetition artifacts while preserving a coherent appearance.

Item: Deep Shading: Convolutional Neural Networks for Screen Space Shading (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Nalbach, Oliver; Arabadzhiyska, Elena; Mehta, Dushyant; Seidel, Hans-Peter; Ritschel, Tobias
Editors: Zwicker, Matthias; Sander, Pedro

In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen-space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance and enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen-space effects at competitive quality and speed while not being programmed by human experts but learned from example images.

Item: Fast Hardware Construction and Refitting of Quantized Bounding Volume Hierarchies (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Viitanen, Timo; Koskela, Matias; Jääskeläinen, Pekka; Immonen, Kalle; Takala, Jarmo
Editors: Zwicker, Matthias; Sander, Pedro

There is recent interest in GPU architectures designed to accelerate ray tracing, especially on mobile systems with limited memory bandwidth. A promising recent approach is to store and traverse the Bounding Volume Hierarchies (BVHs) used to accelerate ray tracing in low arithmetic precision. However, so far there is no research on refitting or construction of such compressed BVHs, which is necessary for any scene with dynamic content. We find that in a hardware-accelerated tree update, significant memory traffic and runtime savings are available from streaming, bottom-up compression. Novel algorithmic techniques of modulo encoding and treelet-based compression are proposed to reduce the backtracking inherent in bottom-up compression. Together, these techniques reduce backtracking to a small fraction. Compared to a separate top-down compression pass, streaming bottom-up compression with the proposed optimizations saves on average 42% of memory accesses for LBVH construction and 56% for refitting of compressed BVHs, over 16 test scenes. In architectural simulation, the proposed streaming compression reduces LBVH runtime by 20% compared to a single-precision build, and by 41% compared to a single-precision build followed by top-down compression. Since memory traffic dominates the energy cost of refitting and LBVH construction, energy consumption is expected to fall by a similar fraction.
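The hardware pipeline itself is not sketched here, but the basic ingredient of a quantized BVH, storing each child box in a few bits relative to its parent box with conservative rounding, can be illustrated in a few lines. The bit width and layout below are assumptions for illustration, not the paper's node format.

```python
import math

BITS = 8                     # illustrative precision, not the paper's exact layout
SCALE = (1 << BITS) - 1      # quantization steps per axis

def quantize_child(parent_lo, parent_hi, child_lo, child_hi):
    """Encode a child AABB with BITS-bit integers relative to its parent AABB.
    Minima round down and maxima round up, so the decoded box always encloses
    the original one (conservative, hence safe for ray traversal)."""
    q_lo, q_hi = [], []
    for axis in range(3):
        extent = max(parent_hi[axis] - parent_lo[axis], 1e-30)
        lo = math.floor((child_lo[axis] - parent_lo[axis]) / extent * SCALE)
        hi = math.ceil((child_hi[axis] - parent_lo[axis]) / extent * SCALE)
        q_lo.append(max(0, min(SCALE, int(lo))))
        q_hi.append(max(0, min(SCALE, int(hi))))
    return q_lo, q_hi

def dequantize_child(parent_lo, parent_hi, q_lo, q_hi):
    """Reconstruct the conservative child AABB from its quantized form."""
    lo, hi = [], []
    for axis in range(3):
        extent = parent_hi[axis] - parent_lo[axis]
        lo.append(parent_lo[axis] + q_lo[axis] / SCALE * extent)
        hi.append(parent_lo[axis] + q_hi[axis] / SCALE * extent)
    return lo, hi

# Example: a child box stored in 8 bits per coordinate still encloses the original.
p_lo, p_hi = (0.0, 0.0, 0.0), (10.0, 10.0, 10.0)
c_lo, c_hi = (1.23, 4.56, 7.89), (2.34, 5.67, 8.90)
print(dequantize_child(p_lo, p_hi, *quantize_child(p_lo, p_hi, c_lo, c_hi)))
```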
Item: Fiber-Level On-the-Fly Procedural Textiles (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Luan, Fujun; Zhao, Shuang; Bala, Kavita
Editors: Zwicker, Matthias; Sander, Pedro

Procedural textile models are compact, easy to edit, and can achieve state-of-the-art realism with fiber-level details. However, these complex models generally need to be fully instantiated (a.k.a. realized) into 3D volumes or fiber meshes and stored in memory. We introduce a novel realization-minimizing technique that enables physically based rendering of procedural textiles without the need for full model realizations. The key ingredients of our technique are new data structures and search algorithms that look up regular and flyaway fibers on the fly, efficiently and consistently. Our technique works with compact fiber-level procedural yarn models in their exact form, with no approximation imposed. In practice, our method can render very large models that are practically unrenderable using existing methods, while using considerably less memory (60-200× less) and achieving good performance.

Item: Line Integration for Rendering Heterogeneous Emissive Volumes (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Simon, Florian; Hanika, Johannes; Zirr, Tobias; Dachsbacher, Carsten
Editors: Zwicker, Matthias; Sander, Pedro

Emissive media are often challenging to render: in thin regions where only a few scattering events occur the emission is poorly sampled, while sampling events for emission can be disadvantageous due to absorption in dense regions. We extend the standard path-space measurement contribution to also collect emission along path segments, not only at vertices. We apply this extension to two estimators: extending paths via scattering and distance sampling, and next event estimation. In order to do so, we unify the two approaches and derive the corresponding Monte Carlo estimators, interpreting next event estimation as a solid-angle sampling technique. We avoid connecting paths to vertices hidden behind dense absorbing layers of smoke by also including transmittance sampling in next event estimation. We demonstrate the advantages of our line integration approach, which generates estimators with lower variance since entire segments are accounted for. Also, our novel forward next event estimation technique yields faster run times compared to previous next event estimation, as it penetrates less deeply into dense volumes.

Item: Stochastic Light Culling for VPLs on GGX Microsurfaces (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Tokuyoshi, Yusuke; Harada, Takahiro
Editors: Zwicker, Matthias; Sander, Pedro

This paper introduces a real-time rendering method for single-bounce glossy caustics created by GGX microsurfaces. Our method is based on stochastic light culling of virtual point lights (VPLs), an unbiased culling method that randomly determines the range of influence of each VPL. While the original stochastic light culling method uses a bounding sphere defined by that light range for coarse culling (e.g., tiled culling), we further extend the method by calculating a tighter bounding ellipsoid for glossy VPLs. Such bounding ellipsoids can be calculated analytically under the classic Phong reflection model, which cannot be applied to the physically plausible materials used in modern computer graphics productions. In order to use stochastic light culling for such modern materials, this paper derives a simple analytical solution to generate a tighter bounding ellipsoid for VPLs on GGX microsurfaces. This paper also presents an efficient implementation for culling bounding ellipsoids in the context of tiled culling. When stochastic light culling is combined with interleaved sampling in a scene with tens of thousands of VPLs, this tiled culling is faster than conservative rasterization-based clustered shading, a state-of-the-art culling technique that supports bounding ellipsoids. Using these techniques, VPLs are culled efficiently for completely dynamic single-bounce glossy caustics reflected from GGX microsurfaces.
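Stochastic light culling, which this paper builds on, assigns each VPL a randomly drawn, finite range of influence while keeping the estimate unbiased in expectation. Below is a minimal single-range Russian-roulette sketch of that idea; the published scheme is more elaborate, and the paper's actual contribution, the GGX bounding ellipsoids, is not shown.

```python
import math
import random

def stochastic_light_range(r_min, xi):
    """Draw a random influence radius r = r_min / sqrt(xi) with xi in (0, 1].
    A receiver at distance d then lies inside the range with probability
    min(1, r_min^2 / d^2)."""
    return r_min / math.sqrt(xi)

def culled_falloff(d, r_min, r):
    """Range-limited, Russian-roulette replacement for the 1/d^2 falloff.
    Dividing the accepted contribution by its acceptance probability gives a
    bounded weight 1 / min(d, r_min)^2, while points beyond r are skipped."""
    if d > r:
        return 0.0                        # culled (with the complementary probability)
    return 1.0 / min(d, r_min) ** 2       # (1/d^2) / min(1, r_min^2/d^2)

# Sanity check: the expectation matches the true inverse-square falloff.
d, r_min, n = 3.0, 0.5, 500000
acc = 0.0
for _ in range(n):
    xi = 1.0 - random.random()            # uniform in (0, 1]
    acc += culled_falloff(d, r_min, stochastic_light_range(r_min, xi))
print(acc / n, 1.0 / d ** 2)              # both values should be close to 0.111
```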
Item: Variance and Convergence Analysis of Monte Carlo Line and Segment Sampling (The Eurographics Association and John Wiley & Sons Ltd., 2017)
Authors: Singh, Gurprit; Miller, Bailey; Jarosz, Wojciech
Editors: Zwicker, Matthias; Sander, Pedro

Recently, researchers have started employing Monte Carlo-like line sample estimators in rendering, demonstrating dramatic reductions in variance (visible noise) for effects such as soft shadows, defocus blur, and participating media. Unfortunately, there is currently no formal theoretical framework to predict and analyze Monte Carlo variance using line and segment samples, which have inherently anisotropic Fourier power spectra. In this work, we propose a theoretical formulation for line and finite-length segment samples in the frequency domain that allows analyzing their anisotropic power spectra using previous isotropic variance and convergence tools. Our analysis shows that judiciously oriented line samples not only reduce the dimensionality but also pre-filter C0 discontinuities, resulting in further improvement in variance and convergence rates. Our theoretical insights also explain how finite-length segment samples impact variance and convergence rates only by pre-filtering discontinuities. We further extend our analysis to consider (uncorrelated) multi-directional line (segment) sampling, showing that such schemes can increase variance compared to unidirectional sampling. We validate our theoretical results with a set of experiments including direct lighting, ambient occlusion, and volumetric caustics using point, line, and segment samples.
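As a toy illustration of why line samples can outperform point samples, the experiment below integrates a binary function with a C0 discontinuity over the unit square: a vertical line sample integrates the y-dimension analytically, leaving a smooth 1D integrand, and its variance is markedly lower than point sampling at equal sample counts. This is only a numerical illustration of the general idea, not the paper's frequency-domain analysis; the boundary curve and sample counts are arbitrary choices.

```python
import math
import random

def g(x):
    """Smooth boundary curve; f(x, y) = [y < g(x)] is a binary integrand over
    the unit square with a C0 discontinuity along the curve y = g(x)."""
    return 0.5 + 0.25 * math.sin(2.0 * math.pi * x)

def point_estimate(n):
    """n independent point samples of the discontinuous 2D integrand."""
    return sum(1.0 if random.random() < g(random.random()) else 0.0
               for _ in range(n)) / n

def line_estimate(n):
    """n vertical line samples: each integrates the y-dimension analytically,
    so the remaining 1D integrand g(x) is smooth (the discontinuity is
    pre-filtered away)."""
    return sum(g(random.random()) for _ in range(n)) / n

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

trials, n = 2000, 16
print("point-sample variance:", variance([point_estimate(n) for _ in range(trials)]))
print("line-sample variance: ", variance([line_estimate(n) for _ in range(trials)]))
```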