Computer Graphics Forum, Volume 38, Issue 4 (Eurographics Symposium on Rendering 2019)


Strasbourg, France | July 10-12, 2019
(The Rendering - DL-only track and the Industry track are available separately.)
Materials and Reflectance
Flexible SVBRDF Capture with a Multi-Image Deep Network
Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, and Adrien Bousseau
On-Site Example-Based Material Appearance Acquisition
Yiming Lin, Pieter Peers, and Abhijeet Ghosh
Glint Rendering based on a Multiple-Scattering Patch BRDF
Xavier Chermain, Frédéric Claux, and Stéphane Mérillou
Microfacet Model Regularization for Robust Light Transport
Johannes Jendersie and Thorsten Grosch
High Performance Rendering
Ray Classification for Accelerated BVH Traversal
Jakub Hendrich, Adam Pospíšil, Daniel Meister, and Jiří Bittner
Scalable Virtual Ray Lights Rendering for Participating Media
Nicolas Vibert, Adrien Gruson, Heine Stokholm, Troels Mortensen, Wojciech Jarosz, Toshiya Hachisuka, and Derek Nowrouzezahrai
Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data
Jana Martschinke, Stefan Hartnagel, Benjamin Keinert, Klaus Engel, and Marc Stamminger
Spectral Effects
Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence
Tom Kneiphof, Tim Golla, and Reinhard Klein
Wide Gamut Spectral Upsampling with Fluorescence
Alisa Jung, Alexander Wilkie, Johannes Hanika, Wenzel Jakob, and Carsten Dachsbacher
Analytic Spectral Integration of Birefringence-Induced Iridescence
Shlomi Steinberg
Light Transport
Quantifying the Error of Light Transport Algorithms
Adam Celarek, Wenzel Jakob, Michael Wimmer, and Jaakko Lehtinen
Adaptive BRDF-Oriented Multiple Importance Sampling of Many Lights
Yifan Liu, Kun Xu, and Ling-Qi Yan
Sampling
Orthogonal Array Sampling for Monte Carlo Rendering
Wojciech Jarosz, Afnan Enayet, Andrew Kensler, Charlie Kilpatrick, and Per Christensen
Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames
Eric Heitz and Laurent Belcour
Combining Point and Line Samples for Direct Illumination
Katherine Salesin and Wojciech Jarosz
Interactive and Real-time Rendering
Tessellated Shading Streaming
Jozef Hladky, Hans-Peter Seidel, and Markus Steinberger
Global Illumination Shadow Layers
François Desrichard, David Vanderhaeghe, and Mathias Paulin
Deep Learning
Learned Fitting of Spatially Varying BRDFs
Sebastian Merzbach, Max Hermann, Martin Rump, and Reinhard Klein
Deep-learning the Latent Space of Light Transport
Pedro Hermosilla, Sebastian Maisch, Tobias Ritschel, and Timo Ropinski

BibTeX (38-Issue 4)
                
@article{10.1111:cgf.13765,
  journal = {Computer Graphics Forum},
  title = {{Flexible SVBRDF Capture with a Multi-Image Deep Network}},
  author = {Deschaintre, Valentin and Aittala, Miika and Durand, Fredo and Drettakis, George and Bousseau, Adrien},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13765}
}

@article{10.1111:cgf.13766,
  journal = {Computer Graphics Forum},
  title = {{On-Site Example-Based Material Appearance Acquisition}},
  author = {Lin, Yiming and Peers, Pieter and Ghosh, Abhijeet},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13766}
}

@article{10.1111:cgf.13767,
  journal = {Computer Graphics Forum},
  title = {{Glint Rendering based on a Multiple-Scattering Patch BRDF}},
  author = {Chermain, Xavier and Claux, Frédéric and Mérillou, Stéphane},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13767}
}

@article{10.1111:cgf.13768,
  journal = {Computer Graphics Forum},
  title = {{Microfacet Model Regularization for Robust Light Transport}},
  author = {Jendersie, Johannes and Grosch, Thorsten},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13768}
}

@article{10.1111:cgf.13769,
  journal = {Computer Graphics Forum},
  title = {{Ray Classification for Accelerated BVH Traversal}},
  author = {Hendrich, Jakub and Pospíšil, Adam and Meister, Daniel and Bittner, Jiří},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13769}
}

@article{10.1111:cgf.13770,
  journal = {Computer Graphics Forum},
  title = {{Scalable Virtual Ray Lights Rendering for Participating Media}},
  author = {Vibert, Nicolas and Gruson, Adrien and Stokholm, Heine and Mortensen, Troels and Jarosz, Wojciech and Hachisuka, Toshiya and Nowrouzezahrai, Derek},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13770}
}

@article{10.1111:cgf.13771,
  journal = {Computer Graphics Forum},
  title = {{Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data}},
  author = {Martschinke, Jana and Hartnagel, Stefan and Keinert, Benjamin and Engel, Klaus and Stamminger, Marc},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13771}
}

@article{10.1111:cgf.13772,
  journal = {Computer Graphics Forum},
  title = {{Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence}},
  author = {Kneiphof, Tom and Golla, Tim and Klein, Reinhard},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13772}
}

@article{10.1111:cgf.13773,
  journal = {Computer Graphics Forum},
  title = {{Wide Gamut Spectral Upsampling with Fluorescence}},
  author = {Jung, Alisa and Wilkie, Alexander and Hanika, Johannes and Jakob, Wenzel and Dachsbacher, Carsten},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13773}
}

@article{10.1111:cgf.13774,
  journal = {Computer Graphics Forum},
  title = {{Analytic Spectral Integration of Birefringence-Induced Iridescence}},
  author = {Steinberg, Shlomi},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13774}
}

@article{10.1111:cgf.13775,
  journal = {Computer Graphics Forum},
  title = {{Quantifying the Error of Light Transport Algorithms}},
  author = {Celarek, Adam and Jakob, Wenzel and Wimmer, Michael and Lehtinen, Jaakko},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13775}
}

@article{10.1111:cgf.13776,
  journal = {Computer Graphics Forum},
  title = {{Adaptive BRDF-Oriented Multiple Importance Sampling of Many Lights}},
  author = {Liu, Yifan and Xu, Kun and Yan, Ling-Qi},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13776}
}

@article{10.1111:cgf.13777,
  journal = {Computer Graphics Forum},
  title = {{Orthogonal Array Sampling for Monte Carlo Rendering}},
  author = {Jarosz, Wojciech and Enayet, Afnan and Kensler, Andrew and Kilpatrick, Charlie and Christensen, Per},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13777}
}

@article{10.1111:cgf.13778,
  journal = {Computer Graphics Forum},
  title = {{Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames}},
  author = {Heitz, Eric and Belcour, Laurent},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13778}
}

@article{10.1111:cgf.13779,
  journal = {Computer Graphics Forum},
  title = {{Combining Point and Line Samples for Direct Illumination}},
  author = {Salesin, Katherine and Jarosz, Wojciech},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13779}
}

@article{10.1111:cgf.13780,
  journal = {Computer Graphics Forum},
  title = {{Tessellated Shading Streaming}},
  author = {Hladky, Jozef and Seidel, Hans-Peter and Steinberger, Markus},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13780}
}

@article{10.1111:cgf.13781,
  journal = {Computer Graphics Forum},
  title = {{Global Illumination Shadow Layers}},
  author = {Desrichard, François and Vanderhaeghe, David and Paulin, Mathias},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13781}
}

@article{10.1111:cgf.13782,
  journal = {Computer Graphics Forum},
  title = {{Learned Fitting of Spatially Varying BRDFs}},
  author = {Merzbach, Sebastian and Hermann, Max and Rump, Martin and Klein, Reinhard},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13782}
}

@article{10.1111:cgf.13783,
  journal = {Computer Graphics Forum},
  title = {{Deep-learning the Latent Space of Light Transport}},
  author = {Hermosilla, Pedro and Maisch, Sebastian and Ritschel, Tobias and Ropinski, Timo},
  year = {2019},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13783}
}


Recent Submissions

  • Item
    Eurographics Symposium on Rendering 2019 - CGF38-4: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Boubekeur, Tamy; Sen, Pradeep; Boubekeur, Tamy and Sen, Pradeep
  • Item
    Flexible SVBRDF Capture with a Multi-Image Deep Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Deschaintre, Valentin; Aittala, Miika; Durand, Fredo; Drettakis, George; Bousseau, Adrien; Boubekeur, Tamy and Sen, Pradeep
    Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images - a sweet spot between existing single-image and complex multi-image approaches.
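    The order-independent fusing layer mentioned above can be illustrated with a minimal sketch (the function name, array shapes, and pooling choice here are illustrative assumptions, not the paper's actual network): per-image feature maps are merged with a symmetric element-wise maximum, so the result does not depend on how many pictures were taken or in which order.

```python
import numpy as np

def fuse_features(per_image_features):
    """Symmetric (order-independent) fusion: element-wise max over a
    variable-length list of per-image feature maps of shape (H, W, C)."""
    stacked = np.stack(per_image_features, axis=0)  # (N, H, W, C)
    return stacked.max(axis=0)                      # (H, W, C)
```

    Because max-pooling is commutative and associative, shuffling or duplicating input images leaves the fused features unchanged, which is what lets the network accept an arbitrary, unordered photo set.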
  • Item
    On-Site Example-Based Material Appearance Acquisition
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Lin, Yiming; Peers, Pieter; Ghosh, Abhijeet; Boubekeur, Tamy and Sen, Pradeep
    We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance, by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible "rapid-appearance-modeling".
  • Item
    Glint Rendering based on a Multiple-Scattering Patch BRDF
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Chermain, Xavier; Claux, Frédéric; Mérillou, Stéphane; Boubekeur, Tamy and Sen, Pradeep
    Rendering materials such as metallic paints, scratched metals and rough plastics requires glint integrators that can capture all micro-specular highlights falling into a pixel footprint, faithfully replicating surface appearance. Specular normal maps can be used to represent a wide range of arbitrary micro-structures. The use of normal maps comes with important drawbacks though: the appearance is dark overall due to back-facing normals and importance sampling is suboptimal, especially when the micro-surface is very rough. We propose a new glint integrator relying on a multiple-scattering patch-based BRDF addressing these issues. To do so, our method uses a modified version of microfacet-based normal mapping [SHHD17] designed for glint rendering, leveraging symmetric microfacets. To model multiple-scattering, we re-introduce the lost energy caused by a perfectly specular, single-scattering formulation instead of using expensive random walks. This reflectance model is the basis of our patch-based BRDF, enabling robust sampling and artifact-free rendering with a natural appearance. Additional calculation costs amount to about 40% in the worst cases compared to previous methods [YHMR16,CCM18].
  • Item
    Microfacet Model Regularization for Robust Light Transport
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Jendersie, Johannes; Grosch, Thorsten; Boubekeur, Tamy and Sen, Pradeep
    Today, Monte Carlo light transport algorithms are used in many applications to render realistic images. Depending on the complexity of the used methods, several light effects can or cannot be found by the sampling process. In particular, specular and smooth glossy surfaces often lead to high noise and missing light effects. Path space regularization provides a solution, improving any sampling algorithm, by modifying the material evaluation code. Previously, Kaplanyan and Dachsbacher [KD13] introduced the concept for pure specular interactions. We extend this idea to the commonly used microfacet models by manipulating the roughness parameter prior to the evaluation. We also show that this kind of regularization requires a change in the MIS weight computation and provide the solution. Finally, we propose two heuristics to adaptively reduce the introduced bias. Using our method, many complex light effects are reproduced and the fidelity of smooth objects is increased. Additionally, if a path was sampleable before, the variance is partially reduced.
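    The core idea of manipulating roughness before BRDF evaluation can be sketched as follows (a hedged toy version: the mollification rule, the roughness floor, and the GGX evaluation below are illustrative, not the paper's exact heuristics or MIS-weight changes): a near-specular lobe is widened toward a minimum effective roughness so that otherwise unsampleable paths gain nonzero probability.

```python
import math

def regularize_roughness(alpha, floor):
    """Mollify a (near-)specular microfacet lobe by enforcing a minimum
    effective roughness before evaluation. A smooth blend is used here
    (illustrative choice, not the paper's heuristic)."""
    return math.sqrt(alpha * alpha + floor * floor)

def ggx_ndf(alpha, cos_theta_h):
    """Standard isotropic GGX normal distribution function, evaluated at
    the half-vector angle; wider alpha spreads the highlight."""
    a2 = alpha * alpha
    c2 = cos_theta_h * cos_theta_h
    denom = c2 * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

    In a renderer one would call `ggx_ndf(regularize_roughness(alpha, floor), ...)` only on paths the estimator could not otherwise sample, then reduce `floor` adaptively to limit bias.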
  • Item
    Ray Classification for Accelerated BVH Traversal
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Hendrich, Jakub; Pospíšil, Adam; Meister, Daniel; Bittner, Jiří; Boubekeur, Tamy and Sen, Pradeep
    For ray tracing based methods, traversing a hierarchical acceleration data structure takes up a substantial portion of the total rendering time. We propose an additional data structure which cuts off large parts of the hierarchical traversal. We use the idea of ray classification combined with the hierarchical scene representation provided by a bounding volume hierarchy. We precompute short arrays of indices to subtrees inside the hierarchy and use them to initiate the traversal for a given ray class. This arrangement is compact enough to be cache-friendly, preventing the method from negating its traversal gains by excessive memory traffic. The method is easy to use with existing renderers, which we demonstrate by integrating it into the PBRT renderer. The proposed technique reduces the number of traversal steps by 42% on average, saving around 15% of the time spent finding ray-scene intersections on average.
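    The ray-classification idea can be sketched with a toy class index (the mapping below, a quantized origin cell combined with the direction's sign octant, is an illustrative assumption; the paper's class definition and subtree-index tables are more refined):

```python
def ray_class(origin, direction, scene_min, scene_max, grid_res=4):
    """Map a ray to a small integer class: the origin is quantized into a
    coarse grid over the scene bounds, and the direction contributes its
    sign octant. Rays in the same class start near each other and point
    the same way, so they tend to visit similar BVH subtrees."""
    octant = 0
    for d in direction:
        octant = (octant << 1) | (1 if d >= 0.0 else 0)
    cell = 0
    for o, lo, hi in zip(origin, scene_min, scene_max):
        t = (o - lo) / (hi - lo)                       # normalize to [0, 1]
        i = min(grid_res - 1, max(0, int(t * grid_res)))
        cell = cell * grid_res + i
    return cell * 8 + octant                           # 8 direction octants
```

    A per-class list of candidate subtree indices can then replace root-down traversal for rays falling into that class.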
  • Item
    Scalable Virtual Ray Lights Rendering for Participating Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Vibert, Nicolas; Gruson, Adrien; Stokholm, Heine; Mortensen, Troels; Jarosz, Wojciech; Hachisuka, Toshiya; Nowrouzezahrai, Derek; Boubekeur, Tamy and Sen, Pradeep
    Virtual ray lights (VRL) are a powerful representation for multiple-scattered light transport in volumetric participating media. While efficient Monte Carlo estimators can importance sample the contribution of a VRL along an entire sensor subpath, render time still scales linearly in the number of VRLs. We present a new scalable hierarchical VRL method that preferentially samples VRLs according to their image contribution. Similar to Lightcuts-based approaches, we derive a tight upper bound on the potential contribution of a VRL that is efficient to compute. Our bound takes into account the sampling probability densities used when estimating VRL contribution. Ours is the first such upper bound formulation, leading to an efficient and scalable rendering technique with only a few intuitive user parameters. We benchmark our approach in scenes with many VRLs, demonstrating improved scalability compared to existing state-of-the-art techniques.
  • Item
    Adaptive Temporal Sampling for Volumetric Path Tracing of Medical Data
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Martschinke, Jana; Hartnagel, Stefan; Keinert, Benjamin; Engel, Klaus; Stamminger, Marc; Boubekeur, Tamy and Sen, Pradeep
    Monte-Carlo path tracing techniques can generate stunning visualizations of medical volumetric data. In a clinical context, such renderings have turned out to be valuable for communication, education, and diagnosis. Because a large number of computationally expensive lighting samples is required to converge to a smooth result, progressive rendering is the only option for interactive settings: Low-sampled, noisy images are shown while the user explores the data, and as soon as the camera is at rest the view is progressively refined. During interaction, the visual quality is low, which strongly impedes the user's experience. Even worse, when a data set is explored in virtual reality, the camera is never at rest, leading to constantly low image quality and strong flickering. In this work we present an approach to bring volumetric Monte-Carlo path tracing to the interactive domain by reusing samples over time. To this end, we transfer the idea of temporal antialiasing from surface rendering to volume rendering. We show how to reproject volumetric ray samples even though they cannot be pinned to a particular 3D position, present an improved weighting scheme that makes longer history trails possible, and define an error accumulation method that downweights less appropriate older samples. Furthermore, we exploit reprojection information to adaptively determine the number of newly generated path tracing samples for each individual pixel. Our approach is designed for static, medical data with both volumetric and surface-like structures. It achieves good-quality volumetric Monte-Carlo renderings with little noise, and is also usable in a VR context.
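    The sample-reuse step resembles the exponential accumulation used in temporal antialiasing; a minimal sketch follows (function name, weight clamp, and constants are illustrative assumptions; the paper's reprojection, weighting scheme, and error accumulation are considerably more involved):

```python
def temporal_accumulate(history_value, history_weight, new_value,
                        max_weight=32.0):
    """Blend a reprojected history value with one fresh path-tracing
    sample. Higher weights make the history change more slowly; clamping
    the weight bounds how stale the history can become. (Toy version.)"""
    w = min(history_weight, max_weight)
    value = (w * history_value + new_value) / (w + 1.0)
    return value, w + 1.0
```

    Downweighting less trustworthy history, as the paper's error accumulation does, would correspond to shrinking `history_weight` before the blend.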
  • Item
    Real-time Image-based Lighting of Microfacet BRDFs with Varying Iridescence
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Kneiphof, Tom; Golla, Tim; Klein, Reinhard; Boubekeur, Tamy and Sen, Pradeep
    Iridescence is a natural phenomenon that is perceived as gradual color changes, depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin-films of varying thickness on top of an arbitrary micro-facet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, which causes an unnatural appearance that is not observed in ground truth data. We address this problem by taking the distribution of light directions, given by the environment map and surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to the ones previously achieved, while having only a small negative impact on performance.
  • Item
    Wide Gamut Spectral Upsampling with Fluorescence
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Jung, Alisa; Wilkie, Alexander; Hanika, Johannes; Jakob, Wenzel; Dachsbacher, Carsten; Boubekeur, Tamy and Sen, Pradeep
    Physically based spectral rendering has become increasingly important in recent years. However, asset textures in such systems are usually still drawn or acquired as RGB tristimulus values. While a number of RGB to spectrum upsampling techniques are available, none of them support upsampling of all colours in the full spectral locus, as it is intrinsically bigger than the gamut of physically valid reflectance spectra. But with display technology moving to increasingly wider gamuts, the ability to achieve highly saturated colours becomes an increasingly important feature. Real materials usually exhibit smooth reflectance spectra, while computationally generated spectra become more blocky as they represent increasingly bright and saturated colours. In print media, plastic or textile design, fluorescent dyes are added to extend the boundaries of the gamut of reflectance spectra. We follow the same approach for rendering: we provide a method which, given an input RGB tristimulus value, automatically provides a mixture of a regular, smooth reflectance spectrum plus a fluorescent part. For highly saturated input colours, the combination yields an improved reconstruction compared to what would be possible relying on a reflectance spectrum alone. At the core of our technique is a simple parametric spectral model for reflectance, excitation, and emission that allows for compact storage and is compatible with texture mapping. The model can then be used as a fluorescent diffuse component in an existing more complex BRDF model. We also provide importance sampling routines for practical application in a path tracer.
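    Smooth parametric reflectance spectra of the kind this line of work builds on (e.g. the sigmoid-of-quadratic family of Jakob and Hanika, which the fluorescent part extends) can be evaluated very cheaply; a sketch follows, with placeholder coefficients since the real ones are fit per RGB value:

```python
import math

def smooth_reflectance(wavelength_nm, c0, c1, c2):
    """Sigmoid-of-quadratic reflectance model: smooth in wavelength and
    always inside (0, 1), i.e. physically valid as a reflectance.
    Coefficients (c0, c1, c2) are normally fit to a target RGB value;
    the ones passed in tests below are placeholders."""
    x = c0 * wavelength_nm ** 2 + c1 * wavelength_nm + c2
    return 0.5 + x / (2.0 * math.sqrt(1.0 + x * x))
```

    Because such spectra are bounded by 1, they cannot reach the most saturated display colors on their own, which is exactly the gap the paper fills with a fluorescent component.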
  • Item
    Analytic Spectral Integration of Birefringence-Induced Iridescence
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Steinberg, Shlomi; Boubekeur, Tamy and Sen, Pradeep
    Optical phenomena that are only observable in optically anisotropic materials are generally ignored in computer graphics. However, such optical effects are not restricted to exotic materials and can also be observed with common translucent objects when optical anisotropy is induced, e.g. via mechanical stress. Furthermore, accurate prediction and reproduction of those optical effects has important practical applications. We provide a short but complete analysis of the relevant electromagnetic theory of light propagation in optically anisotropic media and derive the full set of formulations required to render birefringent materials. We then present a novel method for spectral integration of refraction and reflection in an anisotropic slab. Our approach allows fast and robust rendering of birefringence-induced iridescence in a physically faithful manner and is applicable to both real-time and offline rendering.
  • Item
    Quantifying the Error of Light Transport Algorithms
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Celarek, Adam; Jakob, Wenzel; Wimmer, Michael; Lehtinen, Jaakko; Boubekeur, Tamy and Sen, Pradeep
    This paper proposes a new methodology for measuring the error of unbiased physically based rendering algorithms. The current state of the art includes mean squared error (MSE) based metrics and visual comparisons of equal-time renderings of competing algorithms. Neither is satisfying as MSE does not describe behavior and can exhibit significant variance, and visual comparisons are inherently subjective. Our contribution is two-fold: First, we propose to compute many short renderings instead of a single long run and use the short renderings to estimate MSE expectation and variance as well as per-pixel standard deviation. An algorithm that achieves good results in most runs, but with occasional outliers is essentially unreliable, which we wish to quantify numerically. We use per-pixel standard deviation to identify problematic lighting effects of rendering algorithms. The second contribution is the error spectrum ensemble (ESE), a tool for measuring the distribution of error over frequencies. The ESE serves two purposes: It reveals correlation between pixels and can be used to detect outliers, which offset the amount of error substantially.
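    The many-short-runs protocol described above reduces to simple statistics over independent renderings; a sketch (the function and key names are hypothetical, and the paper's error spectrum ensemble, which works in frequency space, is not reproduced here):

```python
import numpy as np

def error_statistics(renders, reference):
    """Estimate the expectation and variance of the MSE, plus a per-pixel
    standard deviation image, from N short independent renderings of the
    same scene against a converged reference."""
    renders = np.asarray(renders, dtype=np.float64)      # (N, H, W)
    per_run_mse = ((renders - reference) ** 2).mean(axis=(1, 2))
    return {
        "mse_mean": per_run_mse.mean(),                  # expected MSE
        "mse_var": per_run_mse.var(ddof=1),              # run-to-run spread
        "pixel_std": renders.std(axis=0, ddof=1),        # (H, W) image
    }
```

    A high `mse_var` flags an algorithm that is good on average but occasionally fails, and the `pixel_std` image localizes which lighting effects are responsible.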
  • Item
    Adaptive BRDF-Oriented Multiple Importance Sampling of Many Lights
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Liu, Yifan; Xu, Kun; Yan, Ling-Qi; Boubekeur, Tamy and Sen, Pradeep
    Many-light rendering is becoming more common and important as rendering goes into the next level of complexity. However, to calculate the illumination under many lights, state-of-the-art algorithms are still far from efficient, due to the separate consideration of light sampling and BRDF sampling. To deal with the inefficiency of many-light rendering, we present a novel light sampling method named BRDF-oriented light sampling, which selects lights based on importance values estimated using the BRDF's contributions. Our BRDF-oriented light sampling method works naturally with MIS, and allows us to dynamically determine the number of samples allocated for different sampling techniques. With our method, we achieve significantly faster convergence to the ground-truth results, both perceptually and numerically, as compared to previous many-light rendering algorithms.
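    The MIS machinery the method plugs into is the standard balance heuristic between a light-sampling pdf and a BRDF-sampling pdf; a minimal sketch (the paper's BRDF-oriented importance values and adaptive sample allocation are not reproduced):

```python
def balance_heuristic(pdf_chosen, pdf_other):
    """MIS balance heuristic: weight for a sample drawn from the chosen
    technique, given the pdf the competing technique assigns to it."""
    return pdf_chosen / (pdf_chosen + pdf_other)

def mis_contribution(f_value, pdf_chosen, pdf_other):
    """Weighted contribution of one sample under two-technique MIS:
    w * f / p, with w from the balance heuristic."""
    w = balance_heuristic(pdf_chosen, pdf_other)
    return w * f_value / pdf_chosen
```

    The weights for the two techniques at any point sum to one, which keeps the combined estimator unbiased regardless of how samples are split between light and BRDF sampling.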
  • Item
    Orthogonal Array Sampling for Monte Carlo Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Jarosz, Wojciech; Enayet, Afnan; Kensler, Andrew; Kilpatrick, Charlie; Christensen, Per; Boubekeur, Tamy and Sen, Pradeep
    We generalize N-rooks, jittered, and (correlated) multi-jittered sampling to higher dimensions by importing and improving upon a class of techniques called orthogonal arrays from the statistics literature. Renderers typically combine or ''pad'' a collection of lower-dimensional (e.g. 2D and 1D) stratified patterns to form higher-dimensional samples for integration. This maintains stratification in the original dimension pairs, but loses it for all other dimension pairs. For truly multi-dimensional integrands like those in rendering, this increases variance and deteriorates its rate of convergence to that of pure random sampling. Care must therefore be taken to assign the primary dimension pairs to the dimensions with the most integrand variation, but this complicates implementations. We tackle this problem by developing a collection of practical, in-place multi-dimensional sample generation routines that stratify points on all t-dimensional and 1-dimensional projections simultaneously. For instance, when t=2, any 2D projection of our samples is a (correlated) multi-jittered point set. This property not only reduces variance, but also simplifies implementations since sample dimensions can now be assigned to integrand dimensions arbitrarily while maintaining the same level of stratification. Our techniques reduce variance compared to traditional 2D padding approaches like PBRT's (0,2) and Stratified samplers, and provide quality nearly equal to state-of-the-art QMC samplers like Sobol and Halton while avoiding their structured artifacts as commonly seen when using a single sample set to cover an entire image. While in this work we focus on constructing finite sampling point sets, we also discuss potential avenues for extending our work to progressive sequences (more suitable for incremental rendering) in the future.
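    Strength-2 orthogonal arrays of the kind this work builds on can be produced with the classic Bose construction for a prime p (a sketch only; the paper's in-place, randomized, and multi-jittered variants are not shown): p² rows and p+1 columns such that every pair of columns contains every ordered pair of symbols exactly once. Jittering each symbol then yields points in [0,1) stratified on all axis-aligned 2D projections.

```python
import random

def bose_oa(p):
    """Bose construction of a strength-2 orthogonal array OA(p^2, p+1, p, 2)
    for prime p. Row (i, j) -> [i, j, i+j, i+2j, ..., i+(p-1)j] mod p."""
    rows = []
    for i in range(p):
        for j in range(p):
            rows.append([i, j] + [(i + k * j) % p for k in range(1, p)])
    return rows

def oa_samples(p, seed=0):
    """Jitter each OA symbol into [0, 1): one p^2-point set in p+1
    dimensions whose 2D projections are all stratified on a p x p grid."""
    rng = random.Random(seed)
    return [[(s + rng.random()) / p for s in row] for row in bose_oa(p)]
```

    For p=5 this gives 25 samples usable in up to 6 integrand dimensions, with every pair of dimensions stratified, so dimensions can be assigned to integrand dimensions in any order.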
  • Item
    Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Heitz, Eric; Belcour, Laurent; Boubekeur, Tamy and Sen, Pradeep
    Recent work has shown that distributing Monte Carlo errors as a blue noise in screen space improves the perceptual quality of rendered images. However, obtaining such distributions remains an open problem with high sample counts and high-dimensional rendering integrals. In this paper, we introduce a temporal algorithm that aims at overcoming these limitations. Our algorithm is applicable whenever multiple frames are rendered, typically for animated sequences or interactive applications. Our algorithm locally permutes the pixel sequences (represented by their seeds) to improve the error distribution across frames. Our approach works regardless of the sample count or the dimensionality and significantly improves the images in low-varying screen-space regions under coherent motion. Furthermore, it adds negligible overhead compared to the rendering times.
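    The local permutation step can be sketched for a single tile (a toy version under stated assumptions: the real method works on screen-space tiles with a precomputed blue-noise mask and previous-frame error estimates; names and inputs here are illustrative): pixels are ranked by their error estimate and by the target mask, and seeds are reassigned so that the error ranking follows the mask ranking.

```python
import numpy as np

def permute_seeds(seeds, errors, blue_noise_mask):
    """Reassign per-pixel seeds inside one tile so that the ranking of
    (previous-frame) error estimates matches the ranking of a target
    blue-noise mask: the pixel the mask ranks k-th receives the seed
    whose error ranked k-th. (Toy, single-tile version of the idea.)"""
    seeds = np.asarray(seeds)
    error_order = np.argsort(np.asarray(errors).ravel())
    mask_order = np.argsort(np.asarray(blue_noise_mask).ravel())
    new_seeds = np.empty_like(seeds.ravel())
    new_seeds[mask_order] = seeds.ravel()[error_order]
    return new_seeds.reshape(seeds.shape)
```

    Since this only permutes seeds within a tile, each pixel still consumes an unbiased sample sequence; only the spatial arrangement of the error changes.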
  • Item
    Combining Point and Line Samples for Direct Illumination
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Salesin, Katherine; Jarosz, Wojciech; Boubekeur, Tamy and Sen, Pradeep
    We develop a unified framework for combining point and line samples in direct lighting calculations. While line samples have proven beneficial in a variety of rendering contexts, their application in direct lighting has been limited due to a lack of formulas for evaluating advanced BRDFs along a line and performance tied to the orientation of occluders in the scene. We lift these limitations by elevating line samples to a shared higher-dimensional space with point samples. Our key insight is to separate the probability distribution functions of line samples and points that lie along a line sample. This simple conceptual change allows us to apply multiple importance sampling (MIS) between points and lines, and lines with each other, in order to leverage their respective strengths. We also show how to improve the convergence rate of MIS between points and lines in an unbiased way using a novel discontinuity-smoothing balance heuristic. We verify through a set of rendering experiments that our proposed MISing of points and lines, and lines with each other, reduces variance of the direct lighting estimate while supporting an increased range of BSDFs compared to analytic line integration.
  • Item
    Tessellated Shading Streaming
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Hladky, Jozef; Seidel, Hans-Peter; Steinberger, Markus; Boubekeur, Tamy and Sen, Pradeep
    Presenting high-fidelity 3D content on compact portable devices with low computational power is challenging. Smartphones, tablets and head-mounted displays (HMDs) suffer from thermal and battery-life constraints and thus cannot match the render quality of desktop PCs and laptops. Streaming rendering can deliver high-quality content but may suffer from high latency. We propose an approach to efficiently capture shading samples in object space and pack them into a texture. Streaming this texture to the client, we support temporal frame up-sampling with high fidelity, low latency and high mobility. We introduce two novel sample distribution strategies and a novel triangle representation in the shading atlas space. Since such a system requires dynamic parallelism, we propose an implementation exploiting the power of hardware-accelerated tessellation stages. Our approach allows fast decoding and rendering of extrapolated views on a client device by using hardware-accelerated interpolation between shading samples and a set of potentially visible geometry. A comparison to existing shading methods shows that our sample distributions allow better client shading quality than previous atlas streaming approaches and outperforms image-based methods in all relevant aspects.
  • Item
    Global Illumination Shadow Layers
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) DESRICHARD, François; Vanderhaeghe, David; PAULIN, Mathias; Boubekeur, Tamy and Sen, Pradeep
    Computer graphics artists often resort to compositing to rework light effects in a synthetic image without requiring a new render. Shadows are primary subjects of artistic manipulation as they carry important stylistic information while our perception is tolerant of their editing. In this paper we formalize the notion of global shadow, generalizing the direct shadow found in previous work to a global illumination context. We define an object's shadow layer as the difference between two altered renders of the scene. A shadow layer contains the radiance lost on the camera film because of a given object. We translate this definition into the theoretical framework of Monte Carlo integration, obtaining a concise expression of the shadow layer. Building on it, we propose a path tracing algorithm that renders both the original image and any number of shadow layers in a single pass: the user may choose to separate shadows on a per-object and per-light basis, enabling intuitive and decoupled edits.
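    The compositing identity behind a shadow layer can be sketched per pixel: the layer is the difference between a render where the object no longer blocks light and the original render, and an artist edit rescales that difference. The buffers and the `edit_shadow` helper below are illustrative assumptions, not the paper's single-pass path tracer:

    ```python
    import numpy as np

    # Toy 1-D "images" standing in for full renders (values illustrative).
    render_full = np.array([0.2, 0.5, 0.8])       # scene with the occluder's shadow
    render_unblocked = np.array([0.6, 0.5, 0.9])  # altered render: occluder no longer blocks light

    # The object's shadow layer: radiance lost on the film because of it.
    shadow_layer = render_unblocked - render_full

    def edit_shadow(base, layer, strength=1.0):
        """Compositing edit: strength=1 keeps the original shadow,
        strength=0 removes it, intermediate values attenuate it."""
        return base + (1.0 - strength) * layer
    ```

    Because the edit is a per-pixel linear blend between the two renders, attenuating or exaggerating a shadow never requires re-rendering the scene.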
  • Item
    Learned Fitting of Spatially Varying BRDFs
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Merzbach, Sebastian; Hermann, Max; Rump, Martin; Klein, Reinhard; Boubekeur, Tamy and Sen, Pradeep
    The use of spatially varying reflectance models (SVBRDFs) is the state of the art in physically based rendering and the ultimate goal is to acquire them from real world samples. Recently several promising deep learning approaches have emerged that create such models from a few uncalibrated photos, after being trained on synthetic SVBRDF datasets. While the achieved results are already very impressive, the reconstruction accuracy of these approaches is still far from that of specialized devices. On the other hand, fitting SVBRDF parameter maps to the gigabytes of calibrated HDR images per material acquired by state-of-the-art high-quality material scanners takes on the order of several hours for realistic spatial resolutions. In this paper, we present a first deep learning approach that is capable of producing SVBRDF parameter maps more than two orders of magnitude faster than state-of-the-art approaches, while still providing results of equal quality and generalizing to new materials unseen during training. This is made possible by training our network on a large-scale database of material scans that we have gathered with a commercially available SVBRDF scanner. In particular, we train a convolutional neural network to map calibrated input images to the 13 parameter maps of an anisotropic Ward BRDF, modified to account for Fresnel reflections, and evaluate the results by comparing the measured images against re-renderings from our SVBRDF predictions. The novel approach is extensively validated on real world data taken from our material database, which we make publicly available under https://cg.cs.uni-bonn.de/svbrdfs/.
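    The target of the fit is an anisotropic Ward model; for context, a minimal evaluation of the classic anisotropic Ward BRDF (Ward 1992) per surface point is sketched below. This omits the Fresnel modification the paper adds, and all vectors and parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def ward_brdf(wi, wo, n, x, y, rho_d, rho_s, ax, ay):
        """Anisotropic Ward BRDF: diffuse albedo rho_d, specular albedo
        rho_s, roughness ax/ay along tangent frame (x, y, n).
        Plain Ward model, without the paper's Fresnel modification."""
        wi, wo = np.asarray(wi, float), np.asarray(wo, float)
        cos_i, cos_o = wi @ n, wo @ n
        if cos_i <= 0.0 or cos_o <= 0.0:
            return 0.0
        h = wi + wo
        h = h / np.linalg.norm(h)
        # Anisotropic Gaussian lobe around the half vector.
        spec = np.exp(-(((h @ x) / ax) ** 2 + ((h @ y) / ay) ** 2) / (h @ n) ** 2)
        spec /= 4.0 * np.pi * ax * ay * np.sqrt(cos_i * cos_o)
        return rho_d / np.pi + rho_s * spec
    ```

    The model is reciprocal (swapping incoming and outgoing directions leaves the value unchanged), a property a fitted parameter map inherits for free, unlike an unconstrained per-pixel lookup.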
  • Item
    Deep-learning the Latent Space of Light Transport
    (The Eurographics Association and John Wiley & Sons Ltd., 2019) Hermosilla, Pedro; Maisch, Sebastian; Ritschel, Tobias; Ropinski, Timo; Boubekeur, Tamy and Sen, Pradeep
    We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.