37-Issue 4
Browsing 37-Issue 4 by Subject "Computing methodologies"
Now showing 1 - 9 of 9
Item: Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Clausen, Olaf; Marroquim, Ricardo; Fuhrmann, Arnulph
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Physically based rendering uses principles of physics to model the interaction of light with matter. Even though photorealistic renderings can be achieved, the results often fail to be predictive. There are two major issues: first, there is no analytic material model that considers all appearance-critical characteristics; second, light is in many cases described by only three RGB samples. This leads to the problem that there are different models for different material types and that wavelength-dependent phenomena are only approximated. In order to analyze the influence of both problems on the appearance of real-world materials, an accurate comparison between rendering and reality is necessary. Therefore, in this work, we acquired a set of precisely and spectrally resolved ground truth data. It consists of the precise description of a newly developed reference scene, including isotropic BRDFs of 24 color patches, as well as reference measurements of all patches under 13 different angles inside the reference scene. Our reference data covers rough materials with many different spectral distributions and various illumination situations, from direct light to indirect-light-dominated situations.

Item: Deep Adaptive Sampling for Low Sample Count Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Kuznetsov, Alexandr; Kalantari, Nima Khademi; Ramamoorthi, Ravi
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Recently, deep learning approaches have proven successful at removing noise from Monte Carlo (MC) rendered images at extremely low sampling rates, e.g., 1-4 samples per pixel (spp). While these methods provide dramatic speedups, they operate on uniformly sampled MC rendered images. However, the full promise of low sample counts requires both adaptive sampling and reconstruction/denoising. Unfortunately, traditional adaptive sampling techniques fail to handle cases with low sampling rates, since there is insufficient information to reliably compute their required features, such as variance and contrast. In this paper, we address this issue by proposing a deep learning approach for joint adaptive sampling and reconstruction of MC rendered images with extremely low sample counts. Our system consists of two convolutional neural networks (CNNs), responsible for estimating the sampling map and denoising, separated by a renderer. Specifically, we first render a scene with one spp and then use the first CNN to estimate a sampling map, which is used to adaptively distribute three additional samples per pixel on average. We then filter the resulting render with the second CNN to produce the final denoised image. We train both networks by minimizing the error between the denoised and ground truth images on a set of training scenes. To use backpropagation for training both networks, we propose an approach to effectively compute the gradient of the renderer. We demonstrate that our approach produces better results than other sampling techniques. On average, our 4 spp renders are comparable to 6 spp from uniform sampling with deep learning-based denoising. Therefore, 50% more uniformly distributed samples are required to achieve equal quality without adaptive sampling.
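The two-stage pipeline described in the abstract above can be summarized with a short sketch. The following is a minimal, illustrative outline in Python; the render, sampler_cnn, and denoiser_cnn callables and the scene.resolution attribute are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def adaptive_render(scene, render, sampler_cnn, denoiser_cnn, extra_spp=3):
    # 1. Initial pass: one sample per pixel everywhere.
    initial = render(scene, spp_map=np.ones(scene.resolution, dtype=int))

    # 2. The first CNN predicts a per-pixel importance map; normalize it so
    #    the average number of additional samples equals extra_spp.
    raw_map = sampler_cnn(initial)
    spp_map = extra_spp * raw_map / raw_map.mean()

    # 3. Second pass: re-render with the full per-pixel budget
    #    (the initial sample plus the adaptively distributed extras).
    refined = render(scene, spp_map=1 + np.rint(spp_map).astype(int))

    # 4. The second CNN denoises the low-sample render.
    return denoiser_cnn(refined)
```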
Item: Deep Painting Harmonization (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Luan, Fujun; Paris, Sylvain; Shechtman, Eli; Bala, Kavita
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage, and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would otherwise be difficult to achieve.

Item: Efficient Caustic Rendering with Lightweight Photon Mapping (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Grittmann, Pascal; Pérard-Gayot, Arsène; Slusallek, Philipp; Křivánek, Jaroslav
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Robust and efficient rendering of complex lighting effects, such as caustics, remains a challenging task. While algorithms like vertex connection and merging can render such effects robustly, their significant overhead over a simple path tracer is not always justified and, as we show in this paper, also not necessary. In current rendering solutions, caustics often require the user to enable a specialized algorithm, usually a photon mapper, and hand-tune its parameters. But even with carefully chosen parameters, photon mapping may still trace many photons that the path tracer could sample well enough, or, even worse, that are not visible at all. Our goal is robust, yet lightweight, caustics rendering. To that end, we propose a technique to identify and focus computation on the photon paths that offer significant variance reduction over samples from a path tracer. We apply this technique in a rendering solution combining path tracing and photon mapping. The photon emission is automatically guided towards regions where the photons are useful, i.e., provide substantial variance reduction for the currently rendered image. Our method achieves better photon densities with fewer light paths (and thus photons) than emission guiding approaches based on visual importance. In addition, we automatically determine an appropriate number of photons for a given scene, and the algorithm gracefully degenerates to pure path tracing for scenes that do not benefit from photon mapping.

Item: Exploiting Repetitions for Image-Based Rendering of Facades (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Rodriguez, Simon; Bousseau, Adrien; Durand, Fredo; Drettakis, George
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Street-level imagery is now abundant but does not have sufficient capture density to be usable for Image-Based Rendering (IBR) of facades. We present a method that exploits repetitive elements in facades, such as windows, to perform data augmentation, in turn improving camera calibration, reconstructed geometry, and overall rendering quality for IBR. The main intuition behind our approach is that a few views of several instances of an element provide similar information to many views of a single instance of that element. We first select similar instances of an element from 3-4 views of a facade and transform them into a common coordinate system, creating a "platonic" element. We use this common space to refine the camera calibration of each view of each instance and to reconstruct a 3D mesh of the element with multi-view stereo, which we regularize to obtain a piecewise-planar mesh aligned with dominant image contours. Observing the same element under multiple views also allows us to identify reflective areas, such as glass panels, which we use at rendering time to generate plausible reflections using an environment map. Our detailed 3D mesh, augmented set of views, and reflection mask enable image-based rendering of much higher quality than results obtained using the input images directly.
Item: Handling Fluorescence in a Uni-directional Spectral Path Tracer (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Mojzík, Michal; Fichet, Alban; Wilkie, Alexander
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: We present two separate improvements to the handling of fluorescence effects in modern uni-directional spectral rendering systems. The first is the formulation of a new distance tracking scheme for fluorescent volume materials which exhibit a pronounced wavelength asymmetry. Such volumetric materials are an important and not uncommon corner case of wavelength-shifting media behaviour, and have not been addressed so far in the rendering literature. The second is an extension of Hero wavelength sampling which can handle fluorescence events, both on surfaces and in volumes. Both improvements are useful by themselves and can be used separately; when used together, they enable the robust inclusion of arbitrary fluorescence effects in modern uni-directional spectral MIS path tracers. Our extension of Hero wavelength sampling is generally useful, while our proposed technique for distance tracking in strongly asymmetric media is admittedly not very efficient. However, it makes the most of a rather difficult situation and at least allows the inclusion of such media in uni-directional path tracers, albeit at comparatively high cost. This is still an improvement, since until now their inclusion was not really possible at all, due to the inability of conventional tracking schemes to generate sampling points in such volume materials.

Item: On-the-Fly Power-Aware Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Zhang, Yunjin; Ortín, Marta; Arellano, Victor; Wang, Rui; Gutierrez, Diego; Bao, Hujun
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method does not require precomputation over the whole camera-view space, nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model, and our runtime quality error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, while remaining transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer and a mobile device. In both cases, we produce results close to the maximum quality, while achieving significant power savings.
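As a rough illustration of the selection step described in the abstract above, the sketch below picks, at runtime, the configuration with the lowest estimated error among those predicted to stay within the power budget. The predict_power and estimate_error callables are hypothetical stand-ins for the paper's power prediction model and quality error estimator; this is not the authors' code.

```python
def select_configuration(configs, frame_state, power_budget,
                         predict_power, estimate_error):
    # Choose the feasible configuration with the lowest estimated error.
    best, best_error = None, float("inf")
    for cfg in configs:
        # Skip configurations predicted to exceed the power budget.
        if predict_power(cfg, frame_state) > power_budget:
            continue
        err = estimate_error(cfg, frame_state)
        if err < best_error:
            best, best_error = cfg, err
    return best
```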
Item: Runtime Shader Simplification via Instant Search in Reduced Optimization Space (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Yuan, Yazhen; Wang, Rui; Hu, Tianlei; Bao, Hujun
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: Traditional automatic shader simplification simplifies shaders in an offline process, which is typically carried out in a context-oblivious manner or with the use of some example contexts, e.g., certain hardware platforms, scenes, and uniform parameters. As a result, these pre-simplified shaders may fail to adapt to runtime changes of the rendering context that were not considered in the simplification process. In this paper, we propose a new automatic shader simplification technique, which explores two key aspects of a runtime simplification framework: the optimization space and the instant search for optimal simplified shaders with runtime context. The proposed technique still requires a preprocessing stage to process the original shader. However, instead of directly computing optimal simplified shaders, the preprocess generates a reduced shader optimization space. In particular, two heuristic estimates of the quality and performance of simplified shaders are presented to group similar variants into representative ones, which serve as basic graph nodes of the simplification dependency graph (SDG), a new representation of the optimization space. At the runtime simplification stage, a parallel discrete optimization algorithm is employed to instantly search the SDG for optimal simplified shaders. New data-driven cost models are proposed to predict the runtime quality and performance of simplified shaders on the basis of data collected during runtime. Results show that the selected simplifications of complex shaders achieve 1.6 to 2.5 times speedups while retaining high rendering quality.

Item: Stratified Sampling of Projected Spherical Caps (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Authors: Ureña, Carlos; Georgiev, Iliyan
Editors: Jakob, Wenzel; Hachisuka, Toshiya
Abstract: We present a method for uniformly sampling points inside the projection of a spherical cap onto a plane through the sphere's center. To achieve this, we devise two novel area-preserving mappings from the unit square to this projection, which is often an ellipse but generally has a more complex shape. Our maps allow for low-variance rendering of direct illumination from finite and infinite (e.g. sun-like) spherical light sources by sampling their projected solid angle in a stratified manner. We discuss the practical implementation of our maps and show significant quality improvement over traditional uniform spherical cap sampling in a production renderer.
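To illustrate how an area-preserving map enables the stratification mentioned in the last abstract, here is a minimal Python sketch. The projected_cap_map argument is a hypothetical stand-in for one of the paper's mappings, which are not reproduced here.

```python
import random

def stratified_cap_samples(cap, n, projected_cap_map):
    """Generate n*n stratified samples inside a projected spherical cap."""
    samples = []
    for i in range(n):
        for j in range(n):
            # Jittered stratified point in the unit square.
            u = (i + random.random()) / n
            v = (j + random.random()) / n
            # The area-preserving map carries the stratification of the
            # unit square over to the projected cap.
            samples.append(projected_cap_map(u, v, cap))
    return samples
```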