Rendering 2023 - Symposium Track


Delft, The Netherlands | June 28 - 30, 2023
(The Rendering 2023 CGF-track papers are published separately in Computer Graphics Forum.)
Ray Tracing
Mean Value Caching for Walk on Spheres
Ghada Bakbouk and Pieter Peers
Spectral
A Microfacet Model for Specular Fluorescent Surfaces and Fluorescent Volume Rendering using Quantum Dots
Alexis Benamira and Sumant Pattanaik
NeRF
Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training
Julien Philip and Valentin Deschaintre
Materials
SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions
Behnaz Kavoosighafi, Jeppe Revall Frisvad, Saghi Hajisharif, Jonas Unger, and Ehsan Miandji
Data-driven Pixel Filter Aware MIP Maps for SVBRDFs
Pauli Kemppinen, Miika Aittala, and Jaakko Lehtinen
Patterns and Shadows
Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes
Farshad Einabadi, Jean-Yves Guillemaut, and Adrian Hilton
pEt: Direct Manipulation of Differentiable Vector Patterns
Marzia Riso and Fabio Pellacini
FloralSurf: Space-Filling Geodesic Ornaments
Valerio Albano, Filippo Andrea Fanni, Andrea Giachetti, and Fabio Pellacini
Perception
An Inverted Pyramid Acceleration Structure Guiding Foveated Sphere Tracing for Implicit Surfaces in VR
Andreas Polychronakis, George Alex Koulieris, and Katerina Mania
Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing
Henrik Philippi, Jeppe Revall Frisvad, and Henrik Wann Jensen
Gaze-Contingent Perceptual Level of Detail Prediction
Luca Surace, Cara Tursun, Ufuk Celikcan, and Piotr Didyk
Industry Track
Deep Compositional Denoising on Frame Sequences
Xianyao Zhang, Gerhard Röthlin, Marco Manzi, Markus Gross, and Marios Papas
Fast Procedural Noise By Monte Carlo Sampling
Marcos Fajardo and Matt Pharr

BibTeX (Rendering 2023 - Symposium Track)
@inproceedings{10.2312:sr.20232013,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Rendering 2023 Symposium Track: Frontmatter}},
  author = {Ritschel, Tobias and Weidlich, Andrea},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-228-8},
  DOI = {10.2312/sr.20232013}
}
@inproceedings{10.2312:sr.20231120,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Mean Value Caching for Walk on Spheres}},
  author = {Bakbouk, Ghada and Peers, Pieter},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231120}
}
@inproceedings{10.2312:sr.20231121,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{A Microfacet Model for Specular Fluorescent Surfaces and Fluorescent Volume Rendering using Quantum Dots}},
  author = {Benamira, Alexis and Pattanaik, Sumant},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231121}
}
@inproceedings{10.2312:sr.20231122,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training}},
  author = {Philip, Julien and Deschaintre, Valentin},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231122}
}
@inproceedings{10.2312:sr.20231123,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions}},
  author = {Kavoosighafi, Behnaz and Frisvad, Jeppe Revall and Hajisharif, Saghi and Unger, Jonas and Miandji, Ehsan},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231123}
}
@inproceedings{10.2312:sr.20231124,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Data-driven Pixel Filter Aware MIP Maps for SVBRDFs}},
  author = {Kemppinen, Pauli and Aittala, Miika and Lehtinen, Jaakko},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231124}
}
@inproceedings{10.2312:sr.20231125,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes}},
  author = {Einabadi, Farshad and Guillemaut, Jean-Yves and Hilton, Adrian},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231125}
}
@inproceedings{10.2312:sr.20231127,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{FloralSurf: Space-Filling Geodesic Ornaments}},
  author = {Albano, Valerio and Fanni, Filippo Andrea and Giachetti, Andrea and Pellacini, Fabio},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231127}
}
@inproceedings{10.2312:sr.20231126,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{pEt: Direct Manipulation of Differentiable Vector Patterns}},
  author = {Riso, Marzia and Pellacini, Fabio},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231126}
}
@inproceedings{10.2312:sr.20231128,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{An Inverted Pyramid Acceleration Structure Guiding Foveated Sphere Tracing for Implicit Surfaces in VR}},
  author = {Polychronakis, Andreas and Koulieris, George Alex and Mania, Katerina},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231128}
}
@inproceedings{10.2312:sr.20231129,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing}},
  author = {Philippi, Henrik and Frisvad, Jeppe Revall and Jensen, Henrik Wann},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231129}
}
@inproceedings{10.2312:sr.20231130,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Gaze-Contingent Perceptual Level of Detail Prediction}},
  author = {Surace, Luca and Tursun, Cara and Celikcan, Ufuk and Didyk, Piotr},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-229-5},
  DOI = {10.2312/sr.20231130}
}
@inproceedings{10.2312:sr.20231141,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Fast Procedural Noise By Monte Carlo Sampling}},
  author = {Fajardo, Marcos and Pharr, Matt},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-228-8},
  DOI = {10.2312/sr.20231141}
}
@inproceedings{10.2312:sr.20231142,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ritschel, Tobias and Weidlich, Andrea},
  title = {{Deep Compositional Denoising on Frame Sequences}},
  author = {Zhang, Xianyao and R{\"o}thlin, Gerhard and Manzi, Marco and Gross, Markus and Papas, Marios},
  year = {2023},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-228-8},
  DOI = {10.2312/sr.20231142}
}


Recent Submissions

  • Item
    Rendering 2023 Symposium Track: Frontmatter
    (The Eurographics Association, 2023) Ritschel, Tobias; Weidlich, Andrea; Ritschel, Tobias; Weidlich, Andrea
  • Item
    Mean Value Caching for Walk on Spheres
    (The Eurographics Association, 2023) Bakbouk, Ghada; Peers, Pieter; Ritschel, Tobias; Weidlich, Andrea
    Walk on Spheres (WoS) is a grid-free Monte Carlo method for numerically estimating solutions for elliptic partial differential equations (PDEs) such as the Laplace and Poisson PDEs. While WoS is efficient for computing a solution value at a single evaluation point, it becomes less efficient when the solution is required over a whole domain or a region of interest. WoS computes a solution for each evaluation point separately, possibly recomputing similar sub-walks multiple times over multiple evaluation points. In this paper, we introduce a novel filtering and caching strategy that leverages the volume mean value property (in contrast to the boundary mean value property that forms the core of WoS). In addition, to improve quality under sparse cache regimes, we describe a weighted mean as well as a non-uniform sampling method. Finally, we show that we can reduce the variance within the cache by recursively applying the volume mean value property on the cached elements.
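The boundary mean value recursion that plain WoS is built on fits in a few lines. The sketch below is an illustrative toy (not the authors' code, and without the paper's caching): it estimates a Laplace solution on the unit disk by repeatedly jumping to a uniform point on the largest boundary-empty circle.

```python
import math
import random

def walk_on_spheres(x, y, g, dist, eps=1e-3, max_steps=10_000):
    """One Walk on Spheres sample of the Laplace solution u at (x, y).

    g    -- Dirichlet boundary condition, evaluated where the walk stops
    dist -- distance from (x, y) to the domain boundary
    """
    for _ in range(max_steps):
        r = dist(x, y)
        if r < eps:                      # within eps of the boundary: stop
            return g(x, y)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)         # jump to a uniform point on the
        y += r * math.sin(theta)         # boundary of the empty sphere
    return g(x, y)

# Unit disk with boundary data g(x, y) = x; the harmonic extension is u = x,
# so the estimate at (0.3, 0) should approach 0.3.
dist = lambda x, y: 1.0 - math.hypot(x, y)
g = lambda x, y: x
random.seed(7)
n = 20_000
estimate = sum(walk_on_spheres(0.3, 0.0, g, dist) for _ in range(n)) / n
print(estimate)   # close to the true solution u(0.3, 0) = 0.3
```

The paper's contribution sits on top of a loop like this: cached walk endpoints are reused across evaluation points via the volume mean value property.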
  • Item
    A Microfacet Model for Specular Fluorescent Surfaces and Fluorescent Volume Rendering using Quantum Dots
    (The Eurographics Association, 2023) Benamira, Alexis; Pattanaik, Sumant; Ritschel, Tobias; Weidlich, Andrea
    Fluorescent appearance of materials results from a complex light-material interaction phenomenon. The modeling of fluorescent materials for rendering has only been addressed through measurement or for simple diffuse reflections, thus limiting the range of representable appearances. In this work, we introduce and model a fluorescent nanoparticle called a Quantum Dot (QD) for rendering. Our modeling of Quantum Dots serves as a foundation for two physically based rendering applications: first, a fluorescent volumetric scattering model, and second, a fluorescent specular microfacet scattering model. For the latter, we model the Fresnel energy reflection coefficient of a QD-coated microfacet assuming specular fluorescence, making our approach easily integrable with any microfacet reflection model.
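For context on where the paper's model plugs in: a standard microfacet BRDF evaluates a Fresnel reflection term per facet, commonly via Schlick's approximation. The paper replaces this term with a QD-coating-aware coefficient; the sketch below shows only the conventional term it slots in for.

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of the Fresnel reflection coefficient.

    f0 is the reflectance at normal incidence. In a microfacet BRDF this
    factor multiplies the distribution and shadowing terms; the paper's
    QD-coated-microfacet model is a drop-in replacement for it.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0, 0.04))   # normal incidence: returns f0
print(schlick_fresnel(0.0, 0.04))   # grazing incidence: reflectance -> 1
```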
  • Item
    Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training
    (The Eurographics Association, 2023) Philip, Julien; Deschaintre, Valentin; Ritschel, Tobias; Weidlich, Andrea
    NeRF acquisition typically requires careful choice of near planes for the different cameras or suffers from background collapse, creating floating artifacts on the edges of the captured scene. The key insight of this work is that background collapse is caused by a higher density of samples in regions near cameras. As a result of this sampling imbalance, near-camera volumes receive significantly more gradients, leading to incorrect density buildup. We propose a gradient scaling approach to counter-balance this sampling imbalance, removing the need for near planes, while preventing background collapse. Our method can be implemented in a few lines, does not induce any significant overhead, and is compatible with most NeRF implementations.
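The counter-balancing idea can be illustrated with a tiny scale function: since sample density along rays grows roughly quadratically toward the camera, gradients of near samples are damped by the squared normalized distance. This is a hedged sketch of the idea, not the paper's exact scheme; `near` is a hypothetical normalization distance.

```python
def gradient_scale(distance, near=1.0):
    """Per-sample gradient scale: squared distance to the camera,
    normalized by `near` and clamped to 1. Near-camera regions are sampled
    quadratically more densely, so this evens out their total influence.
    (Illustrative; see the paper for the exact formulation.)"""
    return min((distance / near) ** 2, 1.0)

print(gradient_scale(5.0))    # distant samples keep their full gradient
print(gradient_scale(0.1))    # near-camera samples are strongly damped
```

In a training loop this factor would multiply each sample's density/color gradients before backpropagation, which is why the method is only a few lines in practice.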
  • Item
    SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions
    (The Eurographics Association, 2023) Kavoosighafi, Behnaz; Frisvad, Jeppe Revall; Hajisharif, Saghi; Unger, Jonas; Miandji, Ehsan; Ritschel, Tobias; Weidlich, Andrea
    We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter, the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, as well as achieving a higher compression ratio and rendering speed.
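The "single parameter" quality/storage knob is the number of retained coefficients. A minimal stand-in for that idea (the paper operates on sparse tensors over a trained dictionary, not raw vectors as here):

```python
def sparsify(coeffs, k):
    """Keep the k largest-magnitude coefficients and zero the rest.

    k is the single knob trading image quality against storage cost and
    rendering speed, as described in the abstract. Illustrative toy only.
    """
    keep = set(sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

print(sparsify([0.9, -0.05, 0.4, 0.01], 2))   # [0.9, 0.0, 0.4, 0.0]
```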
  • Item
    Data-driven Pixel Filter Aware MIP Maps for SVBRDFs
    (The Eurographics Association, 2023) Kemppinen, Pauli; Aittala, Miika; Lehtinen, Jaakko; Ritschel, Tobias; Weidlich, Andrea
    We propose a data-driven approach for generating MIP map pyramids from SVBRDF parameter maps. We learn a latent material representation where linear image downsampling corresponds to linear prefiltering of surface reflectance. In contrast to prior work, we explicitly model the effect of the antialiasing pixel filter also at the finest resolution. This yields high-quality results even in images that are shaded only once per pixel with no further processing. The SVBRDF maps produced by our method can be used as drop-in replacements within existing rendering systems, and the data-driven nature of our framework makes it possible to change the shading model with little effort. As a proof of concept, we also demonstrate using a shared latent representation for two different shading models, allowing for automatic conversion.
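The linear downsampling in question is ordinary 2x2 box averaging, sketched below. The point of the paper is that this operation is only reflectance-correct in the learned latent space; applying it directly to raw SVBRDF parameters (e.g. roughness or normals), as this toy does, is exactly what loses highlight detail.

```python
def mip_chain(img):
    """Build a MIP pyramid for a square 2^n x 2^n parameter map by
    repeated 2x2 box averaging (plain linear downsampling)."""
    levels = [img]
    while len(levels[-1]) > 1:
        p = levels[-1]
        n = len(p) // 2
        levels.append([[(p[2 * i][2 * j] + p[2 * i][2 * j + 1]
                         + p[2 * i + 1][2 * j] + p[2 * i + 1][2 * j + 1]) / 4.0
                        for j in range(n)] for i in range(n)])
    return levels

levels = mip_chain([[0.0, 0.2], [0.4, 0.6]])
print(levels[-1])   # coarsest level holds the overall mean, 0.3
```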
  • Item
    Learning Projective Shadow Textures for Neural Rendering of Human Cast Shadows from Silhouettes
    (The Eurographics Association, 2023) Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian; Ritschel, Tobias; Weidlich, Andrea
    This contribution introduces a two-step, novel neural rendering framework to learn the transformation from a 2D human silhouette mask to the corresponding cast shadows on background scene geometries. In the first step, the proposed neural renderer learns a binary shadow texture (canonical shadow) from the 2D foreground subject, for each point light source, independent of the background scene geometry. Next, the generated binary shadows are texture-mapped to transparent virtual shadow map planes which are seamlessly used in a traditional rendering pipeline to project hard or soft shadows for arbitrary scenes and light sources of different sizes. The neural renderer is trained with shadow images rendered from a fast, scalable, synthetic data generation framework. We introduce the 3D Virtual Human Shadow (3DVHshadow) dataset as a public benchmark for training and evaluation of human shadow generation. Evaluation on the 3DVHshadow test set and real 2D silhouette images of people demonstrates that the proposed framework achieves comparable performance to traditional geometry-based renderers without requiring knowledge of, or computationally intensive explicit estimation of, the 3D human shape. We also show the benefit of learning intermediate canonical shadow textures, compared to learning to generate shadows directly in camera image space. Further experiments are provided to evaluate the effect of having multiple light sources in the scene, model performance with regard to the relative camera-light 2D angular distance, potential aliasing artefacts related to output image resolution, and the effect of light sources' dimensions on shadow softness.
  • Item
    FloralSurf: Space-Filling Geodesic Ornaments
    (The Eurographics Association, 2023) Albano, Valerio; Fanni, Filippo Andrea; Giachetti, Andrea; Pellacini, Fabio; Ritschel, Tobias; Weidlich, Andrea
    We propose a method to generate floral patterns on manifolds without relying on parametrizations. Taking inspiration from the literature on procedural space-filling vegetation, these patterns are made of non-intersecting ornaments that are grown on the surface by repeatedly adding different types of decorative elements, until the whole surface is covered. Each decorative element is defined by a set of geodesic Bézier splines and a set of growth points from which to continue growing the ornaments. Ornaments are grown in a greedy fashion, one decorative element at a time. At each step, we analyze a set of candidates, and retain the one that maximizes surface coverage, while ensuring that it does not intersect other ornaments. All operations in our method are performed in the intrinsic metric of the surface, thus ensuring that the derived decorations have good coverage, with neither distortions nor discontinuities, and can be grown on complex surfaces. In our method, users control the decorations by selecting the size and shape of the decorative elements and the position of the growth points. We demonstrate decorations that vary in the length of the ornaments' lines, and the number, scale and orientation of the placed decorations. We show that these patterns closely mimic the design of hand-drawn objects. Our algorithm supports any manifold surface represented as a triangle mesh. In particular, we demonstrate patterns generated on surfaces with high genus, with and without borders and holes, and that can include a mixture of thin and large features.
  • Item
    pEt: Direct Manipulation of Differentiable Vector Patterns
    (The Eurographics Association, 2023) Riso, Marzia; Pellacini, Fabio; Ritschel, Tobias; Weidlich, Andrea
    Procedural assets are used in computer graphics applications since variations can be obtained by changing the parameters of the procedural programs. As the number of parameters increases, editing becomes cumbersome as users have to manually navigate a large space of choices. Many methods in the literature have been proposed to estimate parameters from example images, which works well for initial starting points. For precise edits, inverse manipulation approaches let users manipulate the output asset interactively, while the system determines the procedural parameters. In this work, we focus on editing procedural vector patterns, which are collections of vector primitives generated by procedural programs. Recent work has shown how to estimate procedural parameters from example images and sketches, which we complement here by proposing a method for direct manipulation. In our work, users select and interactively transform a set of shape points, while also constraining other selected points. Our method then optimizes for the best pattern parameters using gradient-based optimization of the differentiable procedural functions. We support edits on a large variety of patterns with different shapes, symmetries, continuous and discrete parameters, and with or without occlusions.
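The inverse-manipulation loop reduces to gradient descent on the distance between the pattern's output and the user's dragged target. A one-parameter toy (the hypothetical "pattern" f(t) = 2t stands in for a differentiable procedural program, and a hand-written derivative stands in for autodiff):

```python
def fit_parameter(target, f, dfdt, t=0.0, lr=0.1, steps=200):
    """Minimize (f(t) - target)**2 by gradient descent, recovering the
    procedural parameter t that places the pattern point at `target`."""
    for _ in range(steps):
        err = f(t) - target
        t -= lr * 2.0 * err * dfdt(t)
    return t

# Dragging the pattern point f(t) = 2t to position 3.0 should recover t = 1.5.
t = fit_parameter(3.0, lambda t: 2.0 * t, lambda t: 2.0)
print(t)   # converges to 1.5
```

The paper does this simultaneously over many shape points and constraints, with gradients supplied by the differentiable procedural functions.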
  • Item
    An Inverted Pyramid Acceleration Structure Guiding Foveated Sphere Tracing for Implicit Surfaces in VR
    (The Eurographics Association, 2023) Polychronakis, Andreas; Koulieris, George Alex; Mania, Katerina; Ritschel, Tobias; Weidlich, Andrea
    In this paper, we propose a novel rendering pipeline for sphere tracing signed distance functions (SDFs) that significantly improves sphere tracing performance. Previous methods simply focus on over-relaxing the step size by a fixed amount, reducing the total step count of the ray based on the error of the previous step at the full rendering resolution. Unlike those, our system reconstructs the final image in a multi-scale inverted pyramid fashion that provides progressively finer approximations of a surface's distance from the camera origin. We initiate sphere tracing at a very low resolution approximation of the scene, which provides an initial estimate of the closest surface to a group of rays to be sphere traced. We shoot and trace those rays from that approximated distance instead of shooting them from the camera origin, providing a massive head start that lets the rays leap ahead in the 3D scene, successively generating the following level until the full resolution is reached. This significantly reduces the total step count. Moving up the pyramid to higher and higher resolutions, we repeat this process to further eliminate sphere tracing steps. The multiple resolution levels of the pyramid ensure that we avoid jumps of the ray in the 3D scene that would potentially generate artefacts, especially around scene edges that might be missed when rendering at lower resolutions. This approach allows for a much more efficient use of computational resources and results in a significant boost in performance (more than 20x speed-up in some cases). Integrating a foveated rendering algorithm within the inverted pyramid pipeline further accelerates performance, enabling 16x super-sample anti-aliasing of implicit surfaces in a VR headset. Our experiments demonstrate that our image manipulation remains imperceptible. Our benchmark evaluation indicated a significant boost in sphere tracing performance with or without foveated rendering applied. This enables efficient rendering of SDFs in VR headsets, which is often otherwise impossible due to limited performance.
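The baseline the paper accelerates is the classic sphere tracing loop, sketched below. The `t_start` argument is where the inverted-pyramid idea plugs in: a coarser level hands each ray a head start instead of marching from the camera origin (illustrative sketch, not the authors' implementation).

```python
import math

def sphere_trace(origin, direction, sdf, t_start=0.0, t_max=100.0,
                 eps=1e-4, max_steps=256):
    """March a ray through an SDF. Each step advances by the queried
    distance, the radius of the largest sphere known to be empty."""
    t = t_start
    steps = 0
    while t < t_max and steps < max_steps:
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return t, steps          # hit at parameter t
        t += d                       # safe step
        steps += 1
    return None, steps               # miss

# Unit sphere at the origin; a ray from z = -3 straight toward it hits at t = 2.
unit_sphere = lambda p: math.hypot(p[0], p[1], p[2]) - 1.0
t, steps = sphere_trace([0.0, 0.0, -3.0], [0.0, 0.0, 1.0], unit_sphere)
print(t, steps)
```

In real scenes the step count grows sharply near silhouettes and grazing rays, which is the cost the pyramid's head starts eliminate.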
  • Item
    Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing
    (The Eurographics Association, 2023) Philippi, Henrik; Frisvad, Jeppe Revall; Jensen, Henrik Wann; Ritschel, Tobias; Weidlich, Andrea
    We present a practical method for temporal and stereoscopic filtering that generates stereo-consistent rendering. Existing methods for stereoscopic rendering often reuse samples from one eye for the other or do averaging between the two eyes. These approaches fail in the presence of ray tracing effects such as specular reflections and refractions. We derive a new blending strategy that leverages variance to compute per pixel blending weights for both temporal and stereoscopic rendering. In the temporal domain, our method works well in a low noise context and is robust in the presence of inconsistent motion vectors, where existing methods such as temporal anti-aliasing (TAA) and deep learning super sampling (DLSS) produce artifacts. In the stereoscopic domain, our method provides a new way to ensure consistency between the left and right eyes. The stereoscopic version of our method can be used with our new temporal method or with existing methods such as DLSS and TAA. In all combinations, it reduces the error and significantly increases the consistency between the eyes making it practical for real-time settings such as virtual reality (VR).
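The core of a variance-leveraged blending strategy can be illustrated with inverse-variance weighting of two pixel estimates; this is a simplified stand-in for the per-pixel temporal and left/right-eye weights the paper derives, not its exact formula.

```python
def blend(mean_a, var_a, mean_b, var_b, eps=1e-8):
    """Inverse-variance blend of two estimates of the same pixel: the
    noisier estimate receives the smaller weight, so a low-variance history
    (or other-eye) value dominates a noisy current sample and vice versa."""
    w = (var_b + eps) / (var_a + var_b + 2.0 * eps)
    return w * mean_a + (1.0 - w) * mean_b

print(blend(1.0, 0.0, 3.0, 1.0))   # trusts the zero-variance estimate: ~1.0
print(blend(1.0, 1.0, 3.0, 1.0))   # equal variances: plain average, 2.0
```

Because the weights react to measured variance rather than reprojected motion vectors alone, a scheme like this degrades gracefully where motion vectors are inconsistent, e.g. under specular reflections and refractions.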
  • Item
    Gaze-Contingent Perceptual Level of Detail Prediction
    (The Eurographics Association, 2023) Surace, Luca; Tursun, Cara; Celikcan, Ufuk; Didyk, Piotr; Ritschel, Tobias; Weidlich, Andrea
    New virtual reality headsets and wide field-of-view displays rely on foveated rendering techniques that lower the rendering quality for peripheral vision to increase performance without a perceptible quality loss. While the concept is simple, the practical realization of foveated rendering systems and their full exploitation are still challenging. Existing techniques focus on modulating the spatial resolution of rendering or the shading rate according to the characteristics of human perception. However, most rendering systems also have a significant cost related to geometry processing. In this work, we investigate the problem of mesh simplification, also known as the level of detail (LOD) technique, for foveated rendering. We aim to maximize the amount of LOD simplification while keeping the visibility of changes to the object geometry under a selected threshold. We first propose two perceptually inspired visibility models for mesh simplification suitable for gaze-contingent rendering. The first model focuses on spatial distortions in the object silhouette and body. The second model accounts for the temporal visibility of switching between two LODs. We calibrate the two models using data from perceptual experiments and derive a computational method that predicts a suitable LOD for rendering an object at a specific eccentricity without objectionable quality loss. We apply the technique to the foveated rendering of static and dynamic objects and demonstrate the benefits in a validation experiment. Using our perceptually-driven gaze-contingent LOD selection, we achieve up to 33% extra speedup in rendering performance of complex-geometry scenes when combined with the most recent industrial solutions, i.e., Nanite from Unreal Engine.
  • Item
    Fast Procedural Noise By Monte Carlo Sampling
    (The Eurographics Association, 2023) Fajardo, Marcos; Pharr, Matt; Ritschel, Tobias; Weidlich, Andrea
    Procedural noise functions are widely used in computer graphics as a way to add texture detail to surfaces and volumes. Many noise functions are based on weighted sums that can be expressed in terms of random variables, which makes it possible to compute Monte Carlo estimates of their values at lower cost. Such stochastic noise functions fit naturally into many Monte Carlo estimators already used in rendering. Leveraging the dense image-plane sampling in modern path tracing renderers, we show that stochastic evaluation allows the use of procedural noise at a fraction of its full cost with little additional error.
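The "weighted sum as a random variable" idea admits a compact sketch: instead of evaluating every octave of a fractal sum, pick one octave with probability proportional to its weight and divide by that probability, giving an unbiased single-sample estimate. Sine waves stand in here for a real noise basis such as Perlin octaves; this is an illustration of the estimator, not the authors' implementation.

```python
import math
import random

def fbm(x, octaves=8):
    """Full weighted sum of sine 'octaves' (stand-in for a noise basis)."""
    return sum(0.5 ** o * math.sin(2 ** o * x) for o in range(octaves))

def fbm_stochastic(x, u, octaves=8):
    """Unbiased single-octave Monte Carlo estimate of fbm(x): sample one
    octave o with probability 0.5**o / total, then divide by that pdf."""
    weights = [0.5 ** o for o in range(octaves)]
    total = sum(weights)
    target, acc = u * total, 0.0        # invert the CDF of the weights
    for o, w in enumerate(weights):
        acc += w
        if target < acc:
            # contribution / pdf = (w * sin) / (w / total) = total * sin
            return total * math.sin(2 ** o * x)
    return total * math.sin(2 ** (octaves - 1) * x)

random.seed(1)
x = 0.7
n = 100_000
est = sum(fbm_stochastic(x, random.random()) for _ in range(n)) / n
print(abs(est - fbm(x)))   # small: the estimator is unbiased
```

Averaged over the many camera samples a path tracer already takes per pixel, the per-evaluation noise of such an estimator washes out, which is why the full sum is rarely needed.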
  • Item
    Deep Compositional Denoising on Frame Sequences
    (The Eurographics Association, 2023) Zhang, Xianyao; Röthlin, Gerhard; Manzi, Marco; Gross, Markus; Papas, Marios; Ritschel, Tobias; Weidlich, Andrea
    Path tracing is the prevalent rendering algorithm in the animated movies and visual effects industry, thanks to its simplicity and ability to render physically plausible lighting effects. However, we must simulate millions of light paths before producing one final image, and error manifests as noise during rendering. In fact, it can take tens or even hundreds of CPU hours on a modern computer to render a plausible frame in a recent animated movie. Movie production and the VFX industry rely on image-based denoising algorithms to mitigate the rendering cost; these suppress rendering noise by reusing information in the neighborhood of the pixels, both spatially and temporally.