Rendering 2021 - DL-only Track


Saarbrücken, Germany & Virtual | 29 June – 2 July 2021
(Rendering 2021 CGF-track papers are published separately in Computer Graphics Forum.)
Neural Rendering
NeRF-Tex: Neural Reflectance Field Textures
Hendrik Baatz, Jonathan Granskog, Marios Papas, Fabrice Rousselle, and Jan Novák
Integration
Zero-variance Transmittance Estimation
Eugene d'Eon and Jan Novák
Stochastic Generation of (t, s) Sample Sequences
Andrew Helmer, Per Christensen, and Andrew Kensler
Sampling
Sampling Clear Sky Models using Truncated Gaussian Mixtures
Nick Vitsas, Konstantinos Vardis, and Georgios Papaioannou
Importance Sampling of Glittering BSDFs based on Finite Mixture Distributions
Xavier Chermain, Basile Sauvage, Jean-Michel Dischler, and Carsten Dachsbacher
Practical Product Sampling for Single Scattering in Media
Keven Villeneuve, Adrien Gruson, Iliyan Georgiev, and Derek Nowrouzezahrai
Image and Video Editing
Semantic-Aware Generative Approach for Image Inpainting
Deepankar Chanda and Nima Khademi Kalantari
Differentiable Rendering
Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering
Merlin Nimier-David, Zhao Dong, Wenzel Jakob, and Anton Kaplanyan
Appearance-Driven Automatic 3D Model Simplification
Jon Hasselgren, Jacob Munkberg, Jaakko Lehtinen, Miika Aittala, and Samuli Laine
High Performance Rendering
Fast Polygonal Splatting using Directional Kernel Difference
Yuji Moroto, Toshiya Hachisuka, and Nobuyuki Umetani
Fast Analytic Soft Shadows from Area Lights
Aakash Kt, Parikshit Sakurikar, and P. J. Narayanan
Path Tracing, Monte Carlo Rendering
Firefly Removal in Monte Carlo Rendering with Adaptive Median of meaNs
Jérôme Buisine, Samuel Delepoulle, and Christophe Renaud
Material Models
Practical Ply-Based Appearance Modeling for Knitted Fabrics
Zahra Montazeri, Søren Gammelmark, Henrik Wann Jensen, and Shuang Zhao
MatMorpher: A Morphing Operator for SVBRDFs
Alban Gauthier, Jean-Marc Thiery, and Tamy Boubekeur
Faces and Body
NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting
Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, and Ravi Ramamoorthi
Single-image Full-body Human Relighting
Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, Belen Masia, and Diego Gutierrez
Human Hair Inverse Rendering using Multi-View Photometric data
Tiancheng Sun, Giljoo Nam, Carlos Aliaga, Christophe Hery, and Ravi Ramamoorthi
Perception
A Low-Dimensional Perceptual Space for Intuitive BRDF Editing
Weiqi Shi, Zeyu Wang, Cyril Soler, and Holly Rushmeier
Modeling Surround-aware Contrast Sensitivity
Shinyoung Yi, Daniel S. Jeon, Ana Serrano, Se-Yoon Jeong, Hui-Yong Kim, Diego Gutierrez, and Min H. Kim
Spectral Rendering
Moment-based Constrained Spectral Uplifting
Lucia Tódová, Alexander Wilkie, and Luca Fascione
A Compact Representation for Fluorescent Spectral Data
Qingqin Hua, Alban Fichet, and Alexander Wilkie

BibTeX (Rendering 2021 - DL-only Track)
@inproceedings{10.2312:sr.20211285,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{NeRF-Tex: Neural Reflectance Field Textures}},
  author = {Baatz, Hendrik and Granskog, Jonathan and Papas, Marios and Rousselle, Fabrice and Novák, Jan},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211285}
}
@inproceedings{10.2312:sr.20211286,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Zero-variance Transmittance Estimation}},
  author = {d'Eon, Eugene and Novák, Jan},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211286}
}
@inproceedings{10.2312:sr.20211287,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Stochastic Generation of (t, s) Sample Sequences}},
  author = {Helmer, Andrew and Christensen, Per and Kensler, Andrew},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211287}
}
@inproceedings{10.2312:sr.20211288,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Sampling Clear Sky Models using Truncated Gaussian Mixtures}},
  author = {Vitsas, Nick and Vardis, Konstantinos and Papaioannou, Georgios},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211288}
}
@inproceedings{10.2312:sr.20211289,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Importance Sampling of Glittering BSDFs based on Finite Mixture Distributions}},
  author = {Chermain, Xavier and Sauvage, Basile and Dischler, Jean-Michel and Dachsbacher, Carsten},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211289}
}
@inproceedings{10.2312:sr.20211290,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Practical Product Sampling for Single Scattering in Media}},
  author = {Villeneuve, Keven and Gruson, Adrien and Georgiev, Iliyan and Nowrouzezahrai, Derek},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211290}
}
@inproceedings{10.2312:sr.20211291,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Semantic-Aware Generative Approach for Image Inpainting}},
  author = {Chanda, Deepankar and Kalantari, Nima Khademi},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211291}
}
@inproceedings{10.2312:sr.20211292,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering}},
  author = {Nimier-David, Merlin and Dong, Zhao and Jakob, Wenzel and Kaplanyan, Anton},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211292}
}
@inproceedings{10.2312:sr.20211294,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Fast Polygonal Splatting using Directional Kernel Difference}},
  author = {Moroto, Yuji and Hachisuka, Toshiya and Umetani, Nobuyuki},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211294}
}
@inproceedings{10.2312:sr.20211293,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Appearance-Driven Automatic 3D Model Simplification}},
  author = {Hasselgren, Jon and Munkberg, Jacob and Lehtinen, Jaakko and Aittala, Miika and Laine, Samuli},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211293}
}
@inproceedings{10.2312:sr.20211296,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Firefly Removal in Monte Carlo Rendering with Adaptive Median of meaNs}},
  author = {Buisine, Jérôme and Delepoulle, Samuel and Renaud, Christophe},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211296}
}
@inproceedings{10.2312:sr.20211295,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Fast Analytic Soft Shadows from Area Lights}},
  author = {Kt, Aakash and Sakurikar, Parikshit and Narayanan, P. J.},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211295}
}
@inproceedings{10.2312:sr.20211297,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Practical Ply-Based Appearance Modeling for Knitted Fabrics}},
  author = {Montazeri, Zahra and Gammelmark, Søren and Jensen, Henrik Wann and Zhao, Shuang},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211297}
}
@inproceedings{10.2312:sr.20211298,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{MatMorpher: A Morphing Operator for SVBRDFs}},
  author = {Gauthier, Alban and Thiery, Jean-Marc and Boubekeur, Tamy},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211298}
}
@inproceedings{10.2312:sr.20211299,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting}},
  author = {Sun, Tiancheng and Lin, Kai-En and Bi, Sai and Xu, Zexiang and Ramamoorthi, Ravi},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211299}
}
@inproceedings{10.2312:sr.20211300,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Single-image Full-body Human Relighting}},
  author = {Lagunas, Manuel and Sun, Xin and Yang, Jimei and Villegas, Ruben and Zhang, Jianming and Shu, Zhixin and Masia, Belen and Gutierrez, Diego},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211300}
}
@inproceedings{10.2312:sr.20211301,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Human Hair Inverse Rendering using Multi-View Photometric data}},
  author = {Sun, Tiancheng and Nam, Giljoo and Aliaga, Carlos and Hery, Christophe and Ramamoorthi, Ravi},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211301}
}
@inproceedings{10.2312:sr.20211302,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{A Low-Dimensional Perceptual Space for Intuitive BRDF Editing}},
  author = {Shi, Weiqi and Wang, Zeyu and Soler, Cyril and Rushmeier, Holly},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211302}
}
@inproceedings{10.2312:sr.20211304,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Moment-based Constrained Spectral Uplifting}},
  author = {Tódová, Lucia and Wilkie, Alexander and Fascione, Luca},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211304}
}
@inproceedings{10.2312:sr.20211303,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{Modeling Surround-aware Contrast Sensitivity}},
  author = {Yi, Shinyoung and Jeon, Daniel S. and Serrano, Ana and Jeong, Se-Yoon and Kim, Hui-Yong and Gutierrez, Diego and Kim, Min H.},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211303}
}
@inproceedings{10.2312:sr.20211305,
  booktitle = {Eurographics Symposium on Rendering - DL-only Track},
  editor = {Bousseau, Adrien and McGuire, Morgan},
  title = {{A Compact Representation for Fluorescent Spectral Data}},
  author = {Hua, Qingqin and Fichet, Alban and Wilkie, Alexander},
  year = {2021},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-157-1},
  DOI = {10.2312/sr.20211305}
}

Recent Submissions

  • Item
    Rendering 2021 DL Track: Frontmatter
    (The Eurographics Association, 2021) Bousseau, Adrien; McGuire, Morgan; Bousseau, Adrien and McGuire, Morgan
  • Item
    NeRF-Tex: Neural Reflectance Field Textures
    (The Eurographics Association, 2021) Baatz, Hendrik; Granskog, Jonathan; Papas, Marios; Rousselle, Fabrice; Novák, Jan; Bousseau, Adrien and McGuire, Morgan
    We investigate the use of neural fields for modeling diverse mesoscale structures, such as fur, fabric, and grass. Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural reflectance field (NeRF-Tex), which jointly models the geometry of the material and its response to lighting. The NeRF-Tex primitive can be instantiated over a base mesh to ''texture'' it with the desired meso and microscale appearance. We condition the reflectance field on user-defined parameters that control the appearance. A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modeled and provides a solution for combating repetitive texturing artifacts. We also demonstrate that NeRF textures naturally facilitate continuous level-of-detail rendering. Our approach unites the versatility and modeling power of neural networks with the artistic control needed for precise modeling of virtual scenes. While all our training data is currently synthetic, our work provides a recipe that can be further extended to extract complex, hard-to-model appearances from real images.
  • Item
    Zero-variance Transmittance Estimation
    (The Eurographics Association, 2021) d'Eon, Eugene; Novák, Jan; Bousseau, Adrien and McGuire, Morgan
    We apply zero-variance theory to the Volterra integral formulation of volumetric transmittance. We solve for the guided sampling decisions in this framework that produce zero-variance ratio tracking and next-flight ratio tracking estimators. In both cases, a zero-variance estimate arises by colliding only with the null particles along the interval. For ratio tracking, this is equivalent to residual ratio tracking with a perfect control. The next-flight zero-variance estimator is of the collision type and can only produce zero-variance estimates if the random walk never terminates. In drawing these new connections, we enrich the theory of Monte Carlo transmittance estimation and provide a new rigorous path-stretching interpretation of residual ratio tracking.
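The plain ratio-tracking estimator that this analysis builds on is easy to sketch. The following is a minimal illustration, not the paper's zero-variance variant: free-flight distances are drawn against a constant majorant, and each null collision scales the throughput by the null-particle ratio. All function names and the test medium are our own.

```python
import math, random

def ratio_tracking(sigma_t, sigma_bar, t_max):
    """Unbiased transmittance estimate over [0, t_max] via ratio tracking.

    sigma_t:   heterogeneous extinction coefficient, a function of distance t
    sigma_bar: majorant with sigma_t(t) <= sigma_bar everywhere on the interval
    """
    T, t = 1.0, 0.0
    while True:
        # sample a free-flight distance against the majorant
        t -= math.log(1.0 - random.random()) / sigma_bar
        if t >= t_max:
            return T
        # "collide" with a null particle: scale by the null-collision ratio
        T *= 1.0 - sigma_t(t) / sigma_bar

def transmittance_mc(sigma_t, sigma_bar, t_max, n=50_000):
    """Monte Carlo average of many ratio-tracking estimates."""
    return sum(ratio_tracking(sigma_t, sigma_bar, t_max) for _ in range(n)) / n
```

For a homogeneous medium the average converges to the analytic transmittance exp(-sigma_t * t_max), which makes a convenient sanity check.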
  • Item
    Stochastic Generation of (t, s) Sample Sequences
    (The Eurographics Association, 2021) Helmer, Andrew; Christensen, Per; Kensler, Andrew; Bousseau, Adrien and McGuire, Morgan
    We introduce a novel method to generate sample sequences that are progressively stratified both in high dimensions and in lower-dimensional projections. Our method comes from a new observation that Owen-scrambled quasi-Monte Carlo (QMC) sequences can be generated as stratified samples, merging the QMC construction and random scrambling into a stochastic algorithm. This yields simpler implementations of Owen-scrambled Sobol', Halton, and Faure sequences that exceed the previous state-of-the-art sample-generation speed; we provide an implementation of Owen-scrambled Sobol' (0,2)-sequences in fewer than 30 lines of C++ code that generates 200 million samples per second on a single CPU thread. Inspired by pmj02bn sequences, this stochastic formulation allows multidimensional sequences to be augmented with best-candidate sampling to improve point spacing in arbitrary projections. We discuss the applications of these high-dimensional sequences to rendering, describe a new method to decorrelate sequences while maintaining their progressive properties, and show that an arbitrary sample coordinate can be queried efficiently. Finally we show how the simplicity and local differentiability of our method allows for further optimization of these sequences. As an example, we improve progressive distances of scrambled Sobol' (0,2)-sequences using a (sub)gradient descent optimizer, which generates sequences with near-optimal distances.
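For readers unfamiliar with Owen scrambling, here is a minimal classical (digit-tree) base-2 implementation of the nested uniform scrambling that the paper merges into its stochastic construction, applied to the 1D van der Corput sequence. The `flips` dictionary lazily assigns one random bit per node of the binary digit tree; all names are ours and this is only a sketch of the background technique, not the paper's algorithm.

```python
import random

def van_der_corput(i, depth=16):
    """Base-2 radical inverse of index i (the 1D van der Corput sequence)."""
    x = 0
    for b in range(depth):
        x = (x << 1) | ((i >> b) & 1)
    return x / (1 << depth)

def owen_scramble(x, flips, depth=16):
    """Nested uniform (Owen) scrambling of x in base 2.

    flips maps a (level, digit-prefix) node to a random bit, drawn lazily,
    so every prefix of digits receives a consistent, independent flip.
    """
    xi = int(x * (1 << depth))
    prefix, out = 0, 0
    for i in range(depth):
        bit = (xi >> (depth - 1 - i)) & 1
        f = flips.setdefault((i, prefix), random.getrandbits(1))
        out = (out << 1) | (bit ^ f)
        prefix = (prefix << 1) | bit
    return out / (1 << depth)
```

Because Owen scrambling permutes elementary intervals onto elementary intervals, the scrambled points keep their stratification: the first four scrambled van der Corput points still land in four distinct quarters of [0, 1).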
  • Item
    Sampling Clear Sky Models using Truncated Gaussian Mixtures
    (The Eurographics Association, 2021) Vitsas, Nick; Vardis, Konstantinos; Papaioannou, Georgios; Bousseau, Adrien and McGuire, Morgan
    Parametric clear sky models are often represented by simple analytic expressions that can efficiently generate plausible, natural radiance maps of the sky, taking into account expensive and hard to simulate atmospheric phenomena. In this work, we show how such models can be complemented by an equally simple, elegant and generic analytic continuous probability density function (PDF) that provides a very good approximation to the radiance-based distribution of the sky. We describe a fitting process that is used to properly parameterise a truncated Gaussian mixture model, which allows for exact, constant-time and minimal-memory sampling and evaluation of this PDF, without rejection sampling, an important property for practical applications in offline and real-time rendering. We present experiments in a standard importance sampling framework that showcase variance reduction approaching that of a more expensive inversion sampling method using Summed Area Tables.
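Rejection-free sampling of a truncated Gaussian mixture by CDF inversion, the core primitive the abstract describes, can be sketched with the Python standard library's `NormalDist`. This is our own illustration, not the paper's fitted sky parameterisation; the component weights are assumed to be already normalized over the truncated domain.

```python
import random
from statistics import NormalDist

def sample_truncated_gaussian_mixture(weights, mus, sigmas, lo, hi):
    """Draw one sample from a Gaussian mixture truncated to [lo, hi].

    Constant-time inverse-CDF sampling per component; no rejection step.
    weights are assumed normalized (summing to 1 over the truncated domain).
    """
    u1, u2 = random.random(), random.random()
    # pick a mixture component in proportion to its weight
    acc = 0.0
    for w, mu, sigma in zip(weights, mus, sigmas):
        acc += w
        if u1 <= acc:
            break
    nd = NormalDist(mu, sigma)
    a, b = nd.cdf(lo), nd.cdf(hi)
    # invert the Gaussian CDF restricted to [lo, hi]
    return nd.inv_cdf(a + u2 * (b - a))
```

Every sample lands inside [lo, hi] by construction, which is the property that lets the fitted PDF be evaluated and sampled exactly without rejection.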
  • Item
    Importance Sampling of Glittering BSDFs based on Finite Mixture Distributions
    (The Eurographics Association, 2021) Chermain, Xavier; Sauvage, Basile; Dischler, Jean-Michel; Dachsbacher, Carsten; Bousseau, Adrien and McGuire, Morgan
    We propose an importance sampling scheme for the procedural glittering BSDF of Chermain et al. [CSDD20]. Glittering BSDFs have multi-lobe visible normal distribution functions (VNDFs) which are difficult to sample. They are typically sampled using a mono-lobe Gaussian approximation, leading to high variance and fireflies in the rendering. Our method optimally samples the multi-lobe VNDF, leading to lower variance and removing firefly artefacts at equal render time. It allows, for example, the rendering of glittering glass which requires an efficient sampling of the BSDF. The procedural VNDF of Chermain et al. is a finite mixture of tensor products of two 1D tabulated distributions. We sample the visible normals from their VNDF by first drawing discrete variables according to the mixture weights and then sampling the corresponding 1D distributions using the technique of inverse cumulative distribution functions (CDFs). We achieve these goals by tabulating and storing the CDFs, which uses twice the memory as the original work. We prove the optimality of our VNDF sampling and validate our implementation with statistical tests.
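The generic two-step scheme the abstract describes, drawing a component from the discrete mixture weights and then inverting a tabulated 1D CDF, can be sketched as follows for piecewise-constant PDFs over [0, 1). The names are ours and this is only the sampling skeleton; the paper's VNDF uses tensor products of two such 1D distributions.

```python
import bisect, itertools

def build_cdf(pdf_table):
    """Cumulative table for a piecewise-constant 1D PDF over [0, 1)."""
    total = sum(pdf_table)
    cdf = list(itertools.accumulate(p / total for p in pdf_table))
    cdf[-1] = 1.0  # guard against floating-point drift
    return cdf

def sample_tabulated(cdf, u):
    """Invert a tabulated CDF: binary-search the bin, then offset linearly."""
    i = bisect.bisect_left(cdf, u)
    lo = cdf[i - 1] if i > 0 else 0.0
    w = cdf[i] - lo
    frac = (u - lo) / w if w > 0 else 0.0
    return (i + frac) / len(cdf)

def sample_mixture(weights, cdfs, u1, u2):
    """Pick a component by its mixture weight, then sample its tabulated CDF."""
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if u1 <= acc:
            return sample_tabulated(cdfs[k], u2)
    return sample_tabulated(cdfs[-1], u2)
```

Storing the cumulative tables alongside the PDF tables is what doubles the memory footprint relative to the original work, as the abstract notes.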
  • Item
    Practical Product Sampling for Single Scattering in Media
    (The Eurographics Association, 2021) Villeneuve, Keven; Gruson, Adrien; Georgiev, Iliyan; Nowrouzezahrai, Derek; Bousseau, Adrien and McGuire, Morgan
    Efficient Monte-Carlo estimation of volumetric single scattering remains challenging due to various sources of variance, including transmittance, phase-function anisotropy, geometric cosine foreshortening, and squared-distance fall-off. We propose several complementary techniques to importance sample each of these terms and their product. First, we introduce an extension to equi-angular sampling to analytically account for the foreshortening at point-normal emitters. We then include transmittance and phase function via Taylor-series expansion and/or warp composition. Scaling to complex mesh emitters is achieved through an adaptive tree-splitting scheme. We show improved performance over state-of-the-art baselines in a diversity of scenarios.
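The classic equi-angular sampling that this work extends can be sketched as follows: it importance-samples the squared-distance fall-off toward a point light by warping a uniform number through the subtended angle. This is our own illustration of the baseline technique, not the paper's cosine-foreshortening extension; it assumes a normalized ray direction and a light positioned off the ray axis.

```python
import math

def equiangular_sample(ray_org, ray_dir, light_pos, t_max, u):
    """Sample a distance t in [0, t_max] along the ray with PDF proportional
    to 1 / d(t)^2, where d(t) is the distance from the ray point to the light.
    Returns (t, pdf). ray_dir must be normalized; the light must not lie on
    the ray line (D > 0)."""
    # signed distance along the ray to the light's closest point
    delta = sum((l - o) * d for o, d, l in zip(ray_org, ray_dir, light_pos))
    # perpendicular distance from the light to the ray line
    perp = [l - (o + delta * d) for o, d, l in zip(ray_org, ray_dir, light_pos)]
    D = math.sqrt(sum(c * c for c in perp))
    theta_a = math.atan2(0.0 - delta, D)
    theta_b = math.atan2(t_max - delta, D)
    # warp the uniform number linearly in angle, then map back to distance
    theta = (1.0 - u) * theta_a + u * theta_b
    t = delta + D * math.tan(theta)
    pdf = D / ((theta_b - theta_a) * (D * D + (t - delta) ** 2))
    return t, pdf
```

A quick way to verify the warp is that u = 0 and u = 1 map to the interval endpoints, and that pdf times dt/du equals 1 everywhere (the substitution t = delta + D tan(theta) cancels exactly).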
  • Item
    Semantic-Aware Generative Approach for Image Inpainting
    (The Eurographics Association, 2021) Chanda, Deepankar; Kalantari, Nima Khademi; Bousseau, Adrien and McGuire, Morgan
    We propose a semantic-aware generative method for image inpainting. Specifically, we divide the inpainting process into two tasks: estimating the semantic information inside the masked areas, and inpainting these regions using that semantic information. To effectively utilize the semantic information, we inject it into the generator through conditional feature modulation. Furthermore, we introduce an adversarial framework with dual discriminators to train our generator. In our system, an input-consistency discriminator encourages the inpainted region to match the surrounding unmasked areas, while a semantic-consistency discriminator assesses whether the generated image is consistent with the semantic labels. To obtain the complete input semantic map, we first use a pre-trained network to compute the semantic map in the unmasked areas, and then inpaint it using a network trained in an adversarial manner. We compare our approach against state-of-the-art methods and show significant improvement in the visual quality of the results. Furthermore, we demonstrate the ability of our system to generate user-desired results by allowing a user to manually edit the estimated semantic map.
  • Item
    Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering
    (The Eurographics Association, 2021) Nimier-David, Merlin; Dong, Zhao; Jakob, Wenzel; Kaplanyan, Anton; Bousseau, Adrien and McGuire, Morgan
    Modern geometric reconstruction techniques achieve impressive levels of accuracy in indoor environments. However, such captured data typically keeps lighting and materials entangled. It is then impossible to manipulate the resulting scenes in photorealistic settings, such as augmented / mixed reality and robotics simulation. Moreover, various imperfections in the captured data, such as missing detailed geometry, camera misalignment, uneven coverage of observations, etc., pose challenges for scene recovery. To address these challenges, we present a robust optimization pipeline based on differentiable rendering to recover physically based materials and illumination, leveraging RGB and geometry captures. We introduce a novel texture-space sampling technique and carefully chosen inductive priors to help guide reconstruction, avoiding low-quality or implausible local minima. Our approach enables robust and high-resolution reconstruction of complex materials and illumination in captured indoor scenes. This enables a variety of applications including novel view synthesis, scene editing, local & global relighting, synthetic data augmentation, and other photorealistic manipulations.
  • Item
    Fast Polygonal Splatting using Directional Kernel Difference
    (The Eurographics Association, 2021) Moroto, Yuji; Hachisuka, Toshiya; Umetani, Nobuyuki; Bousseau, Adrien and McGuire, Morgan
    Depth-of-field (DoF) filtering is an important image-processing task for producing blurred images similar to those obtained with a large-aperture camera lens. DoF filtering applies an image convolution with a spatially varying kernel and is thus computationally expensive, even on modern hardware. In this paper, we introduce an approach for fast and accurate DoF filtering with polygonal kernels, whose value is constant inside the kernel. Our approach extends the existing approach based on discrete differenced kernels. The performance gain hinges upon the fact that kernels typically become sparse (i.e., mostly zero) when taking the difference. We extend the existing approach from conventional axis-aligned differences to non-axis-aligned differences. The key insight is that taking differences along the directions of the polygon edges makes polygonal kernels significantly sparser than taking differences only along the axis-aligned directions, as in existing studies. Compared to a naive image convolution, we achieve an order-of-magnitude speedup, allowing real-time application of polygonal kernels even on high-resolution images.
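The conventional axis-aligned kernel-difference scheme that this work generalizes is simple to illustrate: a constant box kernel is recorded as four signed corner deltas, and two prefix sums (one per axis) recover the splatted image, so each splat costs O(1) regardless of kernel size. A minimal sketch with our own names, assuming box corners stay inside the accumulation buffer:

```python
def splat_box(acc, x0, y0, x1, y1, value):
    """Record a constant box kernel [x0, x1) x [y0, y1) as four corner deltas.
    After integration, every pixel inside the box receives `value`."""
    acc[y0][x0] += value
    acc[y0][x1] -= value
    acc[y1][x0] -= value
    acc[y1][x1] += value

def integrate(acc):
    """Two prefix sums (along x, then y) turn corner deltas into the image."""
    h, w = len(acc), len(acc[0])
    img = [row[:] for row in acc]
    for y in range(h):
        for x in range(1, w):
            img[y][x] += img[y][x - 1]
    for y in range(1, h):
        for x in range(w):
            img[y][x] += img[y - 1][x]
    return img
```

The paper's directional differences play the same role for general polygonal kernels, concentrating the nonzero deltas along each edge direction instead of the two image axes.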
  • Item
    Appearance-Driven Automatic 3D Model Simplification
    (The Eurographics Association, 2021) Hasselgren, Jon; Munkberg, Jacob; Lehtinen, Jaakko; Aittala, Miika; Laine, Samuli; Bousseau, Adrien and McGuire, Morgan
    We present a suite of techniques for jointly optimizing triangle meshes and shading models to match the appearance of reference scenes. This capability has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations. We follow and extend the classic analysis-by-synthesis family of techniques: enabled by a highly efficient differentiable renderer and modern nonlinear optimization algorithms, our results are driven to minimize the image-space difference to the target scene when rendered in similar viewing and lighting conditions. As the only signals driving the optimization are differences in rendered images, the approach is highly general and versatile: it easily supports many different forward rendering models such as normal mapping, spatially-varying BRDFs, displacement mapping, etc. Supervision through images only is also key to the ability to easily convert between rendering systems and scene representations. We output triangle meshes with textured materials to ensure that the models render efficiently on modern graphics hardware and benefit from, e.g., hardware-accelerated rasterization, ray tracing, and filtered texture lookups. Our system is integrated in a small Python code base, and can be applied at high resolutions and on large models. We describe several use cases, including mesh decimation, level of detail generation, seamless mesh filtering and approximations of aggregate geometry.
  • Item
    Firefly Removal in Monte Carlo Rendering with Adaptive Median of meaNs
    (The Eurographics Association, 2021) Buisine, Jérôme; Delepoulle, Samuel; Renaud, Christophe; Bousseau, Adrien and McGuire, Morgan
    Estimating the rendering equation using Monte Carlo methods produces photorealistic images by evaluating a large number of samples of the rendering equation per pixel. The final value for each pixel is then calculated as the average of the contributions of the samples. The mean is a good estimator, but it is not necessarily robust, which explains the appearance of visual artifacts such as fireflies caused by an overestimation of the mean. The MoN (Median of meaNs) is a more robust estimator than the mean and reduces the impact of the outliers that cause these fireflies. However, it converges more slowly than the mean, which limits its usefulness for pixels whose distribution contains no outliers. To overcome this problem, we propose an extension of the MoN based on the Gini coefficient, exploiting the better of the two estimators during the computation. The approach is simple to implement with any integrator, requires no complex parameterization, incurs only a small computational overhead, and leads to the disappearance of fireflies.
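The two ingredients, the median-of-means estimator and the Gini coefficient, are both straightforward to sketch. The switching rule and threshold below are our own hypothetical illustration of the adaptive idea, not the paper's exact criterion.

```python
import statistics

def median_of_means(samples, k):
    """Split the samples into k buckets, average each, return the median."""
    return statistics.median(statistics.fmean(samples[i::k]) for i in range(k))

def gini(values):
    """Gini coefficient of non-negative values (0 = all equal; near 1 =
    dominated by a few large contributions, e.g. fireflies)."""
    v = sorted(values)
    n, total = len(v), sum(v)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(v))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

def robust_pixel_estimate(samples, k=11, threshold=0.25):
    """Hypothetical adaptive rule: trust the plain mean for well-behaved
    distributions, fall back to MoN when the Gini coefficient flags outliers."""
    means = [statistics.fmean(samples[i::k]) for i in range(k)]
    if gini([abs(m) for m in means]) > threshold:
        return statistics.median(means)
    return statistics.fmean(samples)
```

On a clean pixel the Gini coefficient of the bucket means is near zero and the (faster-converging) mean is used; a single firefly inflates one bucket, pushes the coefficient up, and triggers the robust median.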
  • Item
    Fast Analytic Soft Shadows from Area Lights
    (The Eurographics Association, 2021) Kt, Aakash; Sakurikar, Parikshit; Narayanan, P. J.; Bousseau, Adrien and McGuire, Morgan
    In this paper, we present the first method to analytically compute shading and soft shadows for physically based BRDFs from arbitrary area lights. We observe that for a given shading point, shadowed radiance can be computed by analytically integrating over the visible portion of the light source using Linearly Transformed Cosines (LTCs). We present a structured approach to project, re-order, and horizon-clip spherical polygons of arbitrary lights and occluders. The visible portion is then computed by repeated set-difference operations. Our method produces noise-free shading and soft shadows and outperforms ray tracing within the same compute budget. We further optimize our algorithm for convex light and occluder meshes by projecting the silhouette edges, as viewed from the shading point, to a spherical polygon and performing a single set-difference operation, achieving a speedup of more than 2x. We analyze the run-time performance of our method and show rendering results on several scenes with multiple light sources and complex occluders. We demonstrate superior results compared to prior work that combines analytic shading with stochastic shadow computation for area lights.
  • Item
    Practical Ply-Based Appearance Modeling for Knitted Fabrics
    (The Eurographics Association, 2021) Montazeri, Zahra; Gammelmark, Søren; Jensen, Henrik Wann; Zhao, Shuang; Bousseau, Adrien and McGuire, Morgan
    Modeling the geometry and appearance of knitted fabrics has been challenging due to their complex geometries and interactions with light. Previous surface-based models have difficulty capturing fine-grained knit geometries; micro-appearance models, on the other hand, typically store individual cloth fibers explicitly and are expensive to generate and render. Further, neither type of model offers the flexibility to accurately capture both the reflection and the transmission of light simultaneously. In this paper, we introduce an efficient technique to generate knit models with user-specified knitting patterns. Our model stores individual knit plies, with fiber-level detail depicted using normal and tangent mapping. We evaluate our generated models using a wide array of knitting patterns, and qualitatively compare renderings of our models to photos of real samples.
  • Item
    MatMorpher: A Morphing Operator for SVBRDFs
    (The Eurographics Association, 2021) Gauthier, Alban; Thiery, Jean-Marc; Boubekeur, Tamy; Bousseau, Adrien and McGuire, Morgan
    We present a novel morphing operator for spatially-varying bidirectional reflectance distribution functions. Our operator takes as input digital materials modeled using a set of 2D texture maps which control the typical parameters of a standard BRDF model. It also takes an interpolation map, defined over the same texture domain, which modulates the interpolation at each texel of the material. Our algorithm is based on a transport mechanism which continuously transforms the individual source maps into their destination counterparts in a feature-sensitive manner. The underlying non-rigid deformation is computed using an energy minimization over a transport grid and accounts for the user-selected dominant features present in the input materials. During this process, we carefully preserve details by mixing the material channels using histogram-aware color blending combined with normal reorientation. As a result, our method allows users to explore large regions of the space of possible materials, using exemplars as anchors and our interpolation scheme as a means of navigation. We also detail our real-time implementation, designed to map faithfully to the standard physically based rendering workflow and to let users interactively steer the morphing process.
  • Item
    NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting
    (The Eurographics Association, 2021) Sun, Tiancheng; Lin, Kai-En; Bi, Sai; Xu, Zexiang; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
    Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine how a face will look in another setup, but computer algorithms still fail at this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under a new environmental lighting. Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as the input, and achieves state-of-the-art results.
  • Item
    Single-image Full-body Human Relighting
    (The Eurographics Association, 2021) Lagunas, Manuel; Sun, Xin; Yang, Jimei; Villegas, Ruben; Zhang, Jianming; Shu, Zhixin; Masia, Belen; Gutierrez, Diego; Bousseau, Adrien and McGuire, Morgan
    We present a single-image data-driven method to automatically relight images with full-body humans in them. Our framework is based on a realistic scene decomposition leveraging precomputed radiance transfer (PRT) and spherical harmonics (SH) lighting. In contrast to previous work, we lift the assumption of Lambertian materials and explicitly model diffuse and specular reflectance in our data. Moreover, we introduce an additional light-dependent residual term that accounts for errors in the PRT-based image reconstruction. We propose a new deep learning architecture, tailored to the decomposition performed in PRT, that is trained using a combination of L1, logarithmic, and rendering losses. Our model outperforms the state of the art for full-body human relighting both with synthetic images and photographs.
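    The PRT + SH decomposition the abstract builds on reduces shading to a per-pixel dot product between a transfer vector and the SH coefficients of the environment light. A minimal sketch of that core operation, assuming second-order SH (9 coefficients) and leaving out the paper's learned diffuse/specular split and residual term:

    ```python
    import numpy as np

    def prt_shade(transfer, sh_light):
        """Precomputed radiance transfer shading: each pixel stores a
        9-element transfer vector (2nd-order SH); outgoing radiance is its
        dot product with the SH coefficients of the environment light.
        transfer: (H, W, 9), sh_light: (9,) -> radiance image (H, W)."""
        return np.einsum('hwk,k->hw', transfer, sh_light)
    ```

    Changing the light then only means swapping in a new 9-vector, which is what makes PRT-style decompositions attractive for relighting.
    
    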
  • Item
    Human Hair Inverse Rendering using Multi-View Photometric data
    (The Eurographics Association, 2021) Sun, Tiancheng; Nam, Giljoo; Aliaga, Carlos; Hery, Christophe; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
    We introduce a hair inverse rendering framework to reconstruct high-fidelity 3D geometry of human hair, as well as its reflectance, which can be readily used for photorealistic rendering of hair. We take multi-view photometric data as input, i.e., a set of images taken from various viewpoints and different lighting conditions. Our method consists of two stages. First, we propose a novel solution for line-based multi-view stereo that yields accurate hair geometry from multi-view photometric data. Specifically, a per-pixel lightcode is proposed to efficiently solve the hair correspondence matching problem. Our new solution enables accurate and dense strand reconstruction from a smaller number of cameras compared to the state-of-the-art work. In the second stage, we estimate hair reflectance properties using multi-view photometric data. A simplified BSDF model of hair strands is used for realistic appearance reproduction. Based on the 3D geometry of hair strands, we fit the longitudinal roughness and find the single strand color. We show that our method can faithfully reproduce the appearance of human hair and provide realism for digital humans. We demonstrate the accuracy and efficiency of our method using photorealistic synthetic hair rendering data.
  • Item
    A Low-Dimensional Perceptual Space for Intuitive BRDF Editing
    (The Eurographics Association, 2021) Shi, Weiqi; Wang, Zeyu; Soler, Cyril; Rushmeier, Holly; Bousseau, Adrien and McGuire, Morgan
    Understanding and characterizing material appearance based on human perception is challenging because of the high dimensionality and nonlinearity of reflectance data. We refer to the process of identifying specific characteristics of material appearance within the same category as material estimation, in contrast to material categorization which focuses on identifying inter-category differences [FNG15]. In this paper, we present a method to simulate the material estimation process based on human perception. We create a continuous perceptual space for measured tabulated data based on its underlying low-dimensional manifold. Unlike many previous works that only address individual perceptual attributes (such as gloss), we focus on extracting all possible dimensions that can explain the perceived differences between appearances. Additionally, we propose a new material editing interface that combines image navigation and sliders to visualize each perceptual dimension and facilitate the editing of tabulated BRDFs. We conduct a user study to evaluate the efficacy of the perceptual space and the interface in terms of appearance matching.
  • Item
    Moment-based Constrained Spectral Uplifting
    (The Eurographics Association, 2021) Tódová, Lucia; Wilkie, Alexander; Fascione, Luca; Bousseau, Adrien and McGuire, Morgan
    Spectral rendering is increasingly used in appearance-critical rendering workflows due to its ability to predict colour values under varying illuminants. However, directly modelling assets via input of spectral data is a tedious process, and if asset appearance is defined via artist-created textures, these are drawn in colour space, i.e. RGB. Converting these RGB values to equivalent spectral representations is an ambiguous problem, for which robust techniques have been proposed only comparatively recently. However, other than the resulting RGB values matching under the illuminant the RGB space is defined for (usually D65), these uplifting techniques do not provide the user with further control over the resulting spectral shape. We propose a method for constraining the spectral uplifting process so that for a finite number of input spectra that need to be preserved, it always yields the correct uplifted spectrum for the corresponding RGB value. Due to constraints placed on the uplifting process, target RGB values that are in close proximity to one another uplift to spectra within the same metameric family, so that textures with colour variations can be meaningfully uplifted. Renderings uplifted via our method show minimal discrepancies when compared to the original objects.
  • Item
    Modeling Surround-aware Contrast Sensitivity
    (The Eurographics Association, 2021) Yi, Shinyoung; Jeon, Daniel S.; Serrano, Ana; Jeong, Se-Yoon; Kim, Hui-Yong; Gutierrez, Diego; Kim, Min H.; Bousseau, Adrien and McGuire, Morgan
    Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists the underlying assumption that such measurements can be carried over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state-of-the-art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround-aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately. We additionally provide a practical version that retains the benefits of our full model, while enabling easy backward compatibility and consistently producing good results across many existing applications that make use of CSF models. We show examples of effective HDR video compression using a transfer function derived from our CSF, tone-mapping, and improved accuracy in visual difference prediction.
  • Item
    A Compact Representation for Fluorescent Spectral Data
    (The Eurographics Association, 2021) Hua, Qingqin; Fichet, Alban; Wilkie, Alexander; Bousseau, Adrien and McGuire, Morgan
    We propose a technique to efficiently importance sample and store fluorescent spectral data. Fluorescence behaviour is properly represented as a re-radiation matrix: for a given input wavelength, this matrix indicates how much energy is re-emitted at all other wavelengths. However, such a 2D representation has a significant memory footprint, especially when a scene contains a high number of fluorescent objects, or fluorescent textures. We propose to use a Gaussian mixture to model re-radiation, which allows us to significantly reduce the memory footprint. Instead of storing the full matrix, we work with a set of Gaussian parameters that also allow direct importance sampling. When accuracy is a concern, one can still use the re-radiation matrix data, and just benefit from the importance sampling provided by the Gaussian mixture. Our method is useful when numerous fluorescent materials are present in a scene, and in particular for textures with fluorescent components.
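    Sampling a re-emission wavelength from such a Gaussian mixture is straightforward: pick a component by its weight, then draw from that component's normal distribution. A minimal, hypothetical sketch (the paper fits the mixture parameters to measured re-radiation data; here they are simply given):

    ```python
    import numpy as np

    def sample_reemission(weights, means, sigmas, rng):
        """Draw a re-emission wavelength (nm) from a 1D Gaussian mixture
        standing in for one row of the re-radiation matrix.
        weights must sum to 1; means/sigmas are per-component parameters."""
        k = rng.choice(len(weights), p=weights)   # select mixture component
        return rng.normal(means[k], sigmas[k])    # sample within component
    ```

    Because the mixture is an analytic density, the same parameters serve both as a compact storage format and as an importance-sampling distribution, which is the trade-off the abstract describes.
    
    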