42-Issue 4


Rendering 2023 - Symposium Proceedings
Delft, The Netherlands | June 28 - 30, 2023
(The Rendering - Symposium Papers track is published as a separate collection.)
Ray Tracing
Markov Chain Mixture Models for Real-Time Direct Illumination
Addis Dittebrandt, Vincent Schüßler, Johannes Hanika, Sebastian Herholz, and Carsten Dachsbacher
Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing
Zheng Zeng, Zilin Xu, Lu Wang, Lifan Wu, and Ling-Qi Yan
Neural Rendering
NEnv: Neural Environment Maps for Global Illumination
Carlos Rodriguez-Pardo, Javier Fabre, Elena Garces, and Jorge Lopez-Moreno
Efficient Path-Space Differentiable Volume Rendering With Respect To Shapes
Zihan Yu, Cheng Zhang, Olivier Maury, Christophe Hery, Zhao Dong, and Shuang Zhao
Neural Free-Viewpoint Relighting for Glossy Indirect Illumination
Nithin Raghavan, Yan Xiao, Kai-En Lin, Tiancheng Sun, Sai Bi, Zexiang Xu, Tzu-Mao Li, and Ravi Ramamoorthi
Spectral
One-to-Many Spectral Upsampling of Reflectances and Transmittances
Laurent Belcour, Pascal Barla, and Gaël Guennebaud
A Hyperspectral Space of Skin Tones for Inverse Rendering of Biophysical Skin Properties
Carlos Aliaga, Mengqi Xia, Hao Xie, Adrian Jarabo, Gustav Braun, and Christophe Hery
NeRF
ModalNeRF: Neural Modal Analysis and Synthesis for Free-Viewpoint Navigation in Dynamically Vibrating Scenes
Automne Petitjean, Yohan Poirier-Ginter, Ayush Tewari, Guillaume Cordonnier, and George Drettakis
Materials
Practical Acquisition of Shape and Plausible Appearance of Reflective and Translucent Objects
Arvin Lin, Yiming Lin, and Abhijeet Ghosh
Video and Editing
PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN
Kai-En Lin, Alex Trevithick, Keli Cheng, Michel Sarkis, Mohsen Ghafoorian, Ning Bi, Gerhard Reitmayr, and Ravi Ramamoorthi
Interactive Control over Temporal Consistency while Stylizing Video Streams
Sumit Shekhar, Max Reimann, Moritz Hilscher, Amir Semmo, Jürgen Döllner, and Matthias Trapp
LoCoPalettes: Local Control for Palette-based Image Editing
Cheng-Kang Ted Chao, Jason Klein, Jianchao Tan, Jose Echevarria, and Yotam Gingold
Scatter
Iridescent Water Droplets Beyond Mie Scattering
Mengqi (Mandy) Xia, Bruce Walter, and Steve Marschner
A Practical and Hierarchical Yarn-based Shading Model for Cloth
Junqiu Zhu, Zahra Montazeri, Jean-Marie Aubry, Ling-Qi Yan, and Andrea Weidlich
Accelerating Hair Rendering by Learning High-Order Scattered Radiance
Aakash KT, Adrian Jarabo, Carlos Aliaga, Matt Jen-Yuan Chiang, Olivier Maury, Christophe Hery, P. J. Narayanan, and Giljoo Nam

BibTeX (42-Issue 4)

@article{10.1111:cgf.14896,
  journal = {Computer Graphics Forum},
  title = {{Rendering 2023 CGF 42-4: Frontmatter}},
  author = {Ritschel, Tobias and Weidlich, Andrea},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14896}
}

@article{10.1111:cgf.14881,
  journal = {Computer Graphics Forum},
  title = {{Markov Chain Mixture Models for Real-Time Direct Illumination}},
  author = {Dittebrandt, Addis and Schüßler, Vincent and Hanika, Johannes and Herholz, Sebastian and Dachsbacher, Carsten},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14881}
}

@article{10.1111:cgf.14882,
  journal = {Computer Graphics Forum},
  title = {{Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing}},
  author = {Zeng, Zheng and Xu, Zilin and Wang, Lu and Wu, Lifan and Yan, Ling-Qi},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14882}
}

@article{10.1111:cgf.14883,
  journal = {Computer Graphics Forum},
  title = {{NEnv: Neural Environment Maps for Global Illumination}},
  author = {Rodriguez-Pardo, Carlos and Fabre, Javier and Garces, Elena and Lopez-Moreno, Jorge},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14883}
}

@article{10.1111:cgf.14884,
  journal = {Computer Graphics Forum},
  title = {{Efficient Path-Space Differentiable Volume Rendering With Respect To Shapes}},
  author = {Yu, Zihan and Zhang, Cheng and Maury, Olivier and Hery, Christophe and Dong, Zhao and Zhao, Shuang},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14884}
}

@article{10.1111:cgf.14885,
  journal = {Computer Graphics Forum},
  title = {{Neural Free-Viewpoint Relighting for Glossy Indirect Illumination}},
  author = {Raghavan, Nithin and Xiao, Yan and Lin, Kai-En and Sun, Tiancheng and Bi, Sai and Xu, Zexiang and Li, Tzu-Mao and Ramamoorthi, Ravi},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14885}
}

@article{10.1111:cgf.14886,
  journal = {Computer Graphics Forum},
  title = {{One-to-Many Spectral Upsampling of Reflectances and Transmittances}},
  author = {Belcour, Laurent and Barla, Pascal and Guennebaud, Gaël},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14886}
}

@article{10.1111:cgf.14887,
  journal = {Computer Graphics Forum},
  title = {{A Hyperspectral Space of Skin Tones for Inverse Rendering of Biophysical Skin Properties}},
  author = {Aliaga, Carlos and Xia, Mengqi and Xie, Hao and Jarabo, Adrian and Braun, Gustav and Hery, Christophe},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14887}
}

@article{10.1111:cgf.14888,
  journal = {Computer Graphics Forum},
  title = {{ModalNeRF: Neural Modal Analysis and Synthesis for Free-Viewpoint Navigation in Dynamically Vibrating Scenes}},
  author = {Petitjean, Automne and Poirier-Ginter, Yohan and Tewari, Ayush and Cordonnier, Guillaume and Drettakis, George},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14888}
}

@article{10.1111:cgf.14889,
  journal = {Computer Graphics Forum},
  title = {{Practical Acquisition of Shape and Plausible Appearance of Reflective and Translucent Objects}},
  author = {Lin, Arvin and Lin, Yiming and Ghosh, Abhijeet},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14889}
}

@article{10.1111:cgf.14890,
  journal = {Computer Graphics Forum},
  title = {{PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN}},
  author = {Lin, Kai-En and Trevithick, Alex and Cheng, Keli and Sarkis, Michel and Ghafoorian, Mohsen and Bi, Ning and Reitmayr, Gerhard and Ramamoorthi, Ravi},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14890}
}

@article{10.1111:cgf.14891,
  journal = {Computer Graphics Forum},
  title = {{Interactive Control over Temporal Consistency while Stylizing Video Streams}},
  author = {Shekhar, Sumit and Reimann, Max and Hilscher, Moritz and Semmo, Amir and Döllner, Jürgen and Trapp, Matthias},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14891}
}

@article{10.1111:cgf.14892,
  journal = {Computer Graphics Forum},
  title = {{LoCoPalettes: Local Control for Palette-based Image Editing}},
  author = {Chao, Cheng-Kang Ted and Klein, Jason and Tan, Jianchao and Echevarria, Jose and Gingold, Yotam},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14892}
}

@article{10.1111:cgf.14893,
  journal = {Computer Graphics Forum},
  title = {{Iridescent Water Droplets Beyond Mie Scattering}},
  author = {Xia, Mengqi (Mandy) and Walter, Bruce and Marschner, Steve},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14893}
}

@article{10.1111:cgf.14894,
  journal = {Computer Graphics Forum},
  title = {{A Practical and Hierarchical Yarn-based Shading Model for Cloth}},
  author = {Zhu, Junqiu and Montazeri, Zahra and Aubry, Jean-Marie and Yan, Ling-Qi and Weidlich, Andrea},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14894}
}

@article{10.1111:cgf.14895,
  journal = {Computer Graphics Forum},
  title = {{Accelerating Hair Rendering by Learning High-Order Scattered Radiance}},
  author = {KT, Aakash and Jarabo, Adrian and Aliaga, Carlos and Chiang, Matt Jen-Yuan and Maury, Olivier and Hery, Christophe and Narayanan, P. J. and Nam, Giljoo},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14895}
}

Recent Submissions
  • Item
    Rendering 2023 CGF 42-4: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Ritschel, Tobias; Weidlich, Andrea
  • Item
    Markov Chain Mixture Models for Real-Time Direct Illumination
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Dittebrandt, Addis; Schüßler, Vincent; Hanika, Johannes; Herholz, Sebastian; Dachsbacher, Carsten
    We present a novel technique to efficiently render complex direct illumination in real-time. It is based on a spatio-temporal randomized mixture model of von Mises-Fisher (vMF) distributions in screen space. For every pixel we determine the vMF distribution to sample from using a Markov chain process which is targeted to capture important features of the integrand. By this we avoid the storage overhead of finite-component deterministic mixture models, for which, in addition, determining the optimal component count is challenging. We use stochastic multiple importance sampling (SMIS) to be independent of the equilibrium distribution of our Markov chain process, since it cancels out in the estimator. Further, we use the same sample to advance the Markov chain and to construct the SMIS estimator, and local Markov chain state permutations avoid the resulting bias due to dependent sampling. As a consequence we require only one ray per sample and pixel. We evaluate our technique using implementations in a research renderer as well as a classic game engine with highly dynamic content. Our results show that it is efficient and quickly readapts to dynamic conditions. We compare to spatio-temporal resampling (ReSTIR), which can suffer from correlation artifacts due to its non-adapting candidate distributions that can deviate strongly from the integrand. While we focus on direct illumination, our approach is more widely applicable, and we exemplarily show the rendering of caustics.
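
    A von Mises-Fisher lobe is the building block the mixture model samples from. As a hypothetical illustration (a minimal sketch, not the authors' implementation), sampling a direction from one vMF lobe and evaluating its density in NumPy:

        import numpy as np

        def sample_vmf(mu, kappa, rng):
            # Sample cos(theta) by inverting the vMF CDF along the mean axis.
            u1, u2 = rng.random(2)
            w = 1.0 + np.log(u1 + (1.0 - u1) * np.exp(-2.0 * kappa)) / kappa
            # Build an orthonormal frame around mu; azimuth is uniform.
            a = np.array([0., 0., 1.]) if abs(mu[2]) < 0.999 else np.array([1., 0., 0.])
            t1 = np.cross(a, mu); t1 /= np.linalg.norm(t1)
            t2 = np.cross(mu, t1)
            phi = 2.0 * np.pi * u2
            r = np.sqrt(max(0.0, 1.0 - w * w))
            return r * np.cos(phi) * t1 + r * np.sin(phi) * t2 + w * mu

        def vmf_pdf(d, mu, kappa):
            # Numerically stable density on the unit sphere.
            return kappa * np.exp(kappa * (np.dot(d, mu) - 1.0)) / \
                   (2.0 * np.pi * (1.0 - np.exp(-2.0 * kappa)))

        rng = np.random.default_rng(1)
        mu = np.array([0.0, 0.0, 1.0])
        d = sample_vmf(mu, 50.0, rng)
        print(d, vmf_pdf(d, mu, 50.0))
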
  • Item
    Ray-aligned Occupancy Map Array for Fast Approximate Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zeng, Zheng; Xu, Zilin; Wang, Lu; Wu, Lifan; Yan, Ling-Qi
    We present a new software ray tracing solution that efficiently computes visibilities in dynamic scenes. We first introduce a novel scene representation: the ray-aligned occupancy map array (ROMA), which is generated by rasterizing the dynamic scene once per frame. Our key contribution is a fast and low-divergence tracing method that computes visibilities in constant time, without constructing and traversing traditional intersection acceleration data structures such as BVHs. To further improve accuracy and alleviate aliasing, we use a spatiotemporal scheme to stochastically distribute the candidate ray samples. We demonstrate the practicality of our method by integrating it into a modern real-time renderer and showing better performance compared to existing techniques based on distance fields (DFs). Our method is free of the typical artifacts caused by incomplete scene information, and is about 2.5×-10× faster than generating and tracing DFs at the same resolution and equal storage.
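
    The constant-time flavor of the visibility query can be illustrated with a toy, single-orientation occupancy map: assuming occupancy has been rasterized into one 64-bit mask per texel with bits ordered along the ray-aligned axis, a visibility test between two depths reduces to a bit test (the paper's full method maintains an array of such maps over many ray orientations):

        import numpy as np

        DEPTH_BITS = 64  # occupancy bits per texel along the ray-aligned axis

        def build_occupancy(voxels):
            # voxels: bool array (H, W, DEPTH_BITS) -> one packed uint64 per texel.
            weights = np.uint64(1) << np.arange(DEPTH_BITS, dtype=np.uint64)
            return (voxels.astype(np.uint64) * weights).sum(axis=2)

        def occluded(occ, x, y, z0, z1):
            # Constant-time visibility between depth slices z0..z1 (z1 exclusive).
            mask = ((np.uint64(1) << np.uint64(z1)) - np.uint64(1)) ^ \
                   ((np.uint64(1) << np.uint64(z0)) - np.uint64(1))
            return bool(occ[y, x] & mask)

        vox = np.zeros((4, 4, DEPTH_BITS), dtype=bool)
        vox[1, 2, 20] = True                # an occupied voxel at depth slice 20
        occ = build_occupancy(vox)
        print(occluded(occ, 2, 1, 5, 40))   # True: segment crosses the occupied bit
        print(occluded(occ, 2, 1, 25, 40))  # False: segment starts past it
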
  • Item
    NEnv: Neural Environment Maps for Global Illumination
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Rodriguez-Pardo, Carlos; Fabre, Javier; Garces, Elena; Lopez-Moreno, Jorge
    Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, day-light illumination. These hinder both accuracy and generality, and do not provide the probability information required for importance-sampling Monte Carlo integration. We propose NEnv, a fully-differentiable deep-learning method capable of compressing and learning to sample from a single environment map. NEnv is composed of two different neural networks: a normalizing flow, able to map samples from uniform distributions to the probability density of the illumination, also providing their corresponding probabilities; and an implicit neural representation which compresses the environment map into an efficient differentiable function. The computation time of environment samples with NEnv is two orders of magnitude less than with traditional methods. NEnv makes no assumptions regarding the content (e.g., natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.
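
    What the sampler must provide for Monte Carlo integration is a sample together with its exact density. A sketch of that estimator structure with an analytic stand-in distribution in place of the learned normalizing flow (the defensive uniform mixture is an illustrative choice, not part of the paper):

        import numpy as np
        from math import gamma

        def beta_pdf(t, a, b):
            return t**(a-1) * (1-t)**(b-1) * gamma(a+b) / (gamma(a) * gamma(b))

        def env(uv):  # stand-in environment map: a bright blob plus dim background
            return np.exp(-80.0*((uv[:,0]-0.7)**2 + (uv[:,1]-0.3)**2)) + 0.05

        rng = np.random.default_rng(0)
        N = 200_000
        # Defensive mixture: half uniform, half a beta fit to the bright region.
        use_beta = rng.random(N) < 0.5
        uv = rng.random((N, 2))
        uv[use_beta, 0] = rng.beta(7.0, 3.0, use_beta.sum())
        uv[use_beta, 1] = rng.beta(3.0, 7.0, use_beta.sum())
        pdf = 0.5 + 0.5 * beta_pdf(uv[:,0], 7.0, 3.0) * beta_pdf(uv[:,1], 3.0, 7.0)
        print(np.mean(env(uv) / pdf))  # importance-sampled integral over [0,1]^2

    With NEnv, both the sample and its density would come from the flow instead of the hand-picked mixture.
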
  • Item
    Efficient Path-Space Differentiable Volume Rendering With Respect To Shapes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Yu, Zihan; Zhang, Cheng; Maury, Olivier; Hery, Christophe; Dong, Zhao; Zhao, Shuang
    Differentiable rendering of translucent objects with respect to their shapes has been a long-standing problem. State-of-the-art methods require detecting object silhouettes or specifying change rates inside translucent objects, both of which can be expensive for translucent objects with complex shapes. In this paper, we address this problem for translucent objects with no refractive or reflective boundaries. By reparameterizing interior components of differential path integrals, our new formulation does not require change rates to be specified in the interior of objects. Further, we introduce new Monte Carlo estimators based on this formulation that do not require explicit detection of object silhouettes.
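
    The effect of reparameterizing an integral whose domain depends on a shape parameter can be seen in one dimension: substituting t = θu moves the parameter into the integrand, where differentiating under the integral sign and Monte Carlo estimation are legal. A toy sketch, not the paper's estimator:

        import numpy as np

        sigma, theta = 2.0, 0.8
        # I(theta) = \int_0^theta sigma*exp(-sigma*t) dt = 1 - exp(-sigma*theta).
        # Reparameterize t = theta*u:
        #   I = \int_0^1 sigma*exp(-sigma*theta*u) * theta du
        rng = np.random.default_rng(0)
        u = rng.random(1_000_000)
        # d/dtheta of the reparameterized integrand, by the product rule:
        dI = np.mean(sigma * np.exp(-sigma * theta * u) * (1.0 - sigma * theta * u))
        print(dI, sigma * np.exp(-sigma * theta))  # MC estimate vs analytic value
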
  • Item
    Neural Free-Viewpoint Relighting for Glossy Indirect Illumination
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Raghavan, Nithin; Xiao, Yan; Lin, Kai-En; Sun, Tiancheng; Bi, Sai; Xu, Zexiang; Li, Tzu-Mao; Ramamoorthi, Ravi
    Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
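
    At run time, relighting reduces to an inner product between wavelet lighting coefficients and predicted transport coefficients. A toy sketch with a 1D orthonormal Haar transform and a random stand-in for the MLP-predicted transport:

        import numpy as np

        def haar(v):
            # Orthonormal multi-level 1D Haar transform (power-of-two length).
            v = v.astype(float).copy()
            n = len(v)
            while n > 1:
                a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)
                d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)
                v[:n//2], v[n//2:n] = a, d
                n //= 2
            return v

        rng = np.random.default_rng(0)
        env = rng.random(256)                 # stand-in (flattened) environment map
        E = haar(env)                         # lighting in the Haar basis
        keep = np.argsort(-np.abs(E))[:32]    # keep the 32 largest coefficients
        T = rng.standard_normal(256) * 0.01   # stand-in for MLP-predicted transport
        radiance = T[keep] @ E[keep]          # sparse inner product = relit value
        print(radiance)
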
  • Item
    One-to-Many Spectral Upsampling of Reflectances and Transmittances
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Belcour, Laurent; Barla, Pascal; Guennebaud, Gaël
    Spectral rendering is essential for the production of physically-plausible synthetic images, but requires introducing several changes in the content generation pipeline. In particular, the authoring of spectral material properties (e.g., albedo maps, indices of refraction, transmittance coefficients) raises new problems. While a wide range of computer graphics methods exists to upsample an RGB color to a spectrum, they all provide a one-to-one mapping. This limits the ability to control interesting color changes such as the Usambara effect or metameric spectra. In this work, we introduce a one-to-many mapping in which we show how we can explore the set of all spectra reproducing a given input color. We apply this method to different color-changing effects such as vathochromism, the change of color with depth, and metamerism.
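
    The one-to-many structure is linear-algebraic at its core: for a 3×N matrix M of observer response curves, all spectra s with Ms = c form an affine subspace, so null-space perturbations of one solution generate metamers. A sketch with crude Gaussian stand-ins for the real color matching functions (physical validity, s in [0,1], would additionally have to be enforced):

        import numpy as np

        lam = np.linspace(400.0, 700.0, 31)                 # wavelengths in nm
        g = lambda mu, s: np.exp(-0.5 * ((lam - mu) / s) ** 2)
        M = np.stack([g(600, 40), g(550, 40), g(450, 25)])  # stand-in CMFs, 3 x 31

        c = np.array([6.0, 7.0, 2.0])                 # target tristimulus value
        s0, *_ = np.linalg.lstsq(M, c, rcond=None)    # one spectrum mapping to c
        _, _, Vt = np.linalg.svd(M)
        null = Vt[3:]                                 # basis of null(M): 28 x 31

        rng = np.random.default_rng(0)
        s1 = s0 + null.T @ (0.02 * rng.standard_normal(len(null)))  # a metamer
        print(np.allclose(M @ s0, c), np.allclose(M @ s1, c))  # both map to c
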
  • Item
    A Hyperspectral Space of Skin Tones for Inverse Rendering of Biophysical Skin Properties
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Aliaga, Carlos; Xia, Mengqi; Xie, Hao; Jarabo, Adrian; Braun, Gustav; Hery, Christophe
    We present a method for estimating the main properties of human skin, leveraging a hyperspectral dataset of skin tones synthetically generated through a biophysical layered skin model and Monte Carlo light transport simulations. Our approach learns the mapping between the skin parameters and diffuse skin reflectance in this space through an encoder-decoder network. We assess performance on both RGB and spectral reflectance up to 1 µm, allowing the model to recover properties in the visible and near-infrared ranges. Instead of restricting the parameters to values in the ranges reported in the medical literature, we allow the model to exceed such ranges to gain the expressiveness to recover outliers like beards, eyebrows, rashes and other imperfections. The continuity of our albedo space allows smooth textures of skin properties to be recovered, enabling reflectance manipulation through meaningful edits of the skin properties. The space is robust under different illumination conditions, and presents high spectral similarity with the current largest datasets of spectral measurements of real human skin while expanding their gamut.
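
    A toy illustration of the inverse-rendering pattern (with a stand-in analytic decoder, not the paper's trained network): skin parameters are recovered by descending on the reflectance reconstruction error:

        import numpy as np

        def decoder(p):
            # Stand-in: 3 parameters -> 8-band "reflectance"; smooth and nonlinear.
            bands = np.linspace(0.0, 1.0, 8)
            return 1.0 / (1.0 + np.exp(-(p[0]*bands + p[1]*bands**2 + p[2])))

        target_p = np.array([2.0, -1.0, 0.3])
        r_obs = decoder(target_p)                 # "measured" reflectance

        p = np.zeros(3)
        for _ in range(5000):                     # gradient descent, finite diffs
            grad = np.zeros(3)
            f0 = np.sum((decoder(p) - r_obs) ** 2)
            for i in range(3):
                dp = np.zeros(3); dp[i] = 1e-4
                grad[i] = (np.sum((decoder(p + dp) - r_obs) ** 2) - f0) / 1e-4
            p -= 0.2 * grad
        print(p, target_p)                        # recovered (approx.) vs truth
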
  • Item
    ModalNeRF: Neural Modal Analysis and Synthesis for Free-Viewpoint Navigation in Dynamically Vibrating Scenes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Petitjean, Automne; Poirier-Ginter, Yohan; Tewari, Ayush; Cordonnier, Guillaume; Drettakis, George
    Recent advances in Neural Radiance Fields enable the capture of scenes with motion. However, editing the motion is hard; no existing method allows editing beyond the space of motion existing in the original video, nor editing based on physics. We present the first approach that allows physically-based editing of motion in a scene captured with a single hand-held video camera, containing vibrating or periodic motion. We first introduce a Lagrangian representation, representing motion as the displacement of particles, which is learned while training a radiance field. We use these particles to create a continuous representation of motion over the sequence, which is then used to perform a modal analysis of the motion thanks to a Fourier transform on the particle displacement over time. The resulting extracted modes allow motion synthesis, and easy editing of the motion, while inheriting the ability for free-viewpoint synthesis in the captured 3D scene from the radiance field. We demonstrate our new method on synthetic and real captured scenes.
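
    The modal step itself is compact: Fourier-transform per-particle displacements over time, pick spectral peaks as modes, and resynthesize with user-chosen gains. A stand-in sketch on synthetic particle motion:

        import numpy as np

        T, P, fps = 256, 100, 60.0
        t = np.arange(T) / fps
        rng = np.random.default_rng(0)
        shape = rng.standard_normal((P, 3))          # stand-in mode shape
        disp = np.sin(2*np.pi*3.0*t)[:, None, None] * shape  # vibration at ~3 Hz

        D = np.fft.rfft(disp, axis=0)                # per-particle spectrum
        power = (np.abs(D) ** 2).sum(axis=(1, 2))
        k = 1 + np.argmax(power[1:])                 # dominant non-DC bin
        freq = k * fps / T
        mode = D[k] / T                              # complex modal coordinates
        gain = 3.0                                   # user edit: amplify the mode
        resynth = 2 * gain * np.real(
            mode[None] * np.exp(2j*np.pi*freq*t)[:, None, None])
        print(freq, resynth.shape)                   # ~3 Hz recovered
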
  • Item
    Practical Acquisition of Shape and Plausible Appearance of Reflective and Translucent Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lin, Arvin; Lin, Yiming; Ghosh, Abhijeet
    We present a practical method for acquisition of shape and plausible appearance of reflective and translucent objects for realistic rendering and relighting applications. Such objects are extremely challenging to scan with existing capture setups, and have previously required complex lightstage hardware emitting continuous illumination. We instead employ a practical capture setup consisting of a set of desktop LCD screens that illuminate such objects with piece-wise continuous illumination for acquisition. We employ phase-shifted sinusoidal illumination for novel estimation of high-quality photometric normals and the transmission vector, along with diffuse-specular separated reflectance/transmission maps for realistic relighting. We further employ neural in-painting to fill gaps in our measurements caused by gaps in the screen illumination, and a novel NeuS-based neural rendering that combines the shape and reflectance maps acquired from multiple viewpoints for high-quality 3D surface geometry reconstruction, along with plausible realistic rendering of the complex light transport in such objects.
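
    The sinusoidal decoding step is classical three-step phase shifting: per pixel, three captures I_k = A + B·cos(phi - 2πk/3) give the DC term, amplitude and phase in closed form (the pattern convention here is an assumption, not the authors' exact setup):

        import numpy as np

        def decode_three_step(I0, I1, I2):
            # I_k = A + B*cos(phi - 2*pi*k/3): recover DC, amplitude, phase.
            A = (I0 + I1 + I2) / 3.0
            num = np.sqrt(3.0) * (I1 - I2)     # = 3*B*sin(phi)
            den = 2.0 * I0 - I1 - I2           # = 3*B*cos(phi)
            phi = np.arctan2(num, den)
            B = np.sqrt(num**2 + den**2) / 3.0
            return A, B, phi

        # Synthetic check on a single "pixel":
        A_t, B_t, phi_t = 0.4, 0.3, 1.1
        I = [A_t + B_t * np.cos(phi_t - 2*np.pi*k/3) for k in range(3)]
        print(decode_three_step(*I))           # ~ (0.4, 0.3, 1.1)
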
  • Item
    PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lin, Kai-En; Trevithick, Alex; Cheng, Keli; Sarkis, Michel; Ghafoorian, Mohsen; Bi, Ning; Reitmayr, Gerhard; Ramamoorthi, Ravi
    Portrait synthesis creates realistic digital avatars which enable users to interact with others in a compelling way. Recent advances in StyleGAN and its extensions have shown promising results in synthesizing photorealistic and accurate reconstructions of human faces. However, previous methods often focus on frontal face synthesis, and most are not able to handle large head rotations due to the training data distribution of StyleGAN. In this work, our goal is to take as input a monocular video of a face and create an editable dynamic portrait able to handle extreme head poses. The user can create novel viewpoints, edit the appearance, and animate the face. Our method utilizes pivotal tuning inversion (PTI) to learn a personalized video prior from a monocular video sequence. Then we can input pose and expression coefficients to MLPs and manipulate the latent vectors to synthesize different viewpoints and expressions of the subject. We also propose novel loss functions to further disentangle pose and expression in the latent space. Our algorithm shows much better performance over previous approaches on monocular video datasets, and it is also capable of running in real-time at 54 FPS on an RTX 3080.
  • Item
    Interactive Control over Temporal Consistency while Stylizing Video Streams
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Shekhar, Sumit; Reimann, Max; Hilscher, Moritz; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias
    Image stylization has seen significant advancement and widespread interest over the years, leading to the development of a multitude of techniques. Extending these stylization techniques, such as Neural Style Transfer (NST), to videos is often achieved by applying them on a per-frame basis. However, per-frame stylization usually lacks temporal consistency, expressed by undesirable flickering artifacts. Most of the existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: they (1) are only suitable for a limited range of techniques, (2) do not support online processing as they require the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not provide interactive consistency control. Domain-agnostic techniques for temporal consistency aim to eradicate flickering completely but typically disregard aesthetic aspects. For stylization tasks, however, consistency control is an essential requirement, as a certain amount of flickering adds to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To achieve the above requirements, we propose an approach that stylizes video streams in real-time at full HD resolutions while providing interactive consistency control. We develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. Further, we employ an adaptive combination of local and global consistency features and enable interactive selection between them. Objective and subjective evaluations demonstrate that our method is superior to state-of-the-art video consistency approaches. Project page: maxreimann.github.io/stream-consistency
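
    The consistency mechanism can be reduced to warping the previously stylized frame along optical flow and blending it with the freshly stylized frame under an interactive weight. A minimal sketch (nearest-neighbor warping for brevity; the flow network and occlusion handling are omitted):

        import numpy as np

        def warp(img, flow):
            # Backward warp: sample img at (x + u, y + v), nearest neighbor.
            h, w = img.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
            sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
            return img[sy, sx]

        def consistent_frame(stylized, prev_out, flow, alpha):
            # alpha=1: fully consistent (warped history); alpha=0: per-frame style.
            return alpha * warp(prev_out, flow) + (1.0 - alpha) * stylized

        rng = np.random.default_rng(0)
        prev_out = rng.random((4, 4, 3))
        stylized = rng.random((4, 4, 3))
        flow = np.zeros((4, 4, 2))      # stand-in flow, current -> previous frame
        print(consistent_frame(stylized, prev_out, flow, alpha=0.6).shape)
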
  • Item
    LoCoPalettes: Local Control for Palette-based Image Editing
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Chao, Cheng-Kang Ted; Klein, Jason; Tan, Jianchao; Echevarria, Jose; Gingold, Yotam
    Palette-based image editing takes advantage of the fact that color palettes are intuitive abstractions of images. They allow users to make global edits to an image by adjusting a small set of colors. Many algorithms have been proposed to compute color palettes and corresponding mixing weights. However, in many cases, especially in complex scenes, a single global palette may not adequately represent all potential objects of interest. Edits made using a single palette cannot be localized to specific semantic regions. We introduce an adaptive solution to the usability problem based on optimizing RGB palette colors to achieve arbitrary image-space constraints and automatically splitting the image into semantic sub-regions with more representative local palettes when the constraints cannot be satisfied. Our algorithm automatically decomposes a given image into a semantic hierarchy of soft segments. Difficult-to-achieve edits become straightforward with our method. Our results show the flexibility, control, and generality of our method.
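
    For a single-pixel constraint, the smallest palette change has a closed form: with mixing weights w_i and palette colors c_i, the pixel color is the weighted sum of palette colors, and the least-norm palette offset distributes the residual in proportion to the weights. A sketch of that solve (the paper's semantic splitting fallback is omitted):

        import numpy as np

        def least_norm_palette_edit(palette, weights, target):
            # palette: (K, 3) colors; weights: (K,) mixing weights at the pixel.
            current = weights @ palette              # pixel color = sum_i w_i*c_i
            residual = target - current
            # Minimize sum_i ||dc_i||^2 s.t. sum_i w_i*dc_i = residual:
            delta = np.outer(weights, residual) / np.sum(weights ** 2)
            return palette + delta

        palette = np.array([[0.9, 0.1, 0.1], [0.1, 0.2, 0.8], [0.9, 0.9, 0.9]])
        weights = np.array([0.5, 0.3, 0.2])
        edited = least_norm_palette_edit(palette, weights, np.array([0.2, 0.6, 0.3]))
        print(weights @ edited)                      # exactly the requested target
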
  • Item
    Iridescent Water Droplets Beyond Mie Scattering
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Xia, Mengqi (Mandy); Walter, Bruce; Marschner, Steve
    Looking at a cup of hot tea, an observer can see color patterns and granular textures both on the water surface and in the steam. Motivated by this example, we model the appearance of iridescent water droplets. Mie scattering describes the scattering of light waves by individual spherical particles and is the building block for both effects, but we show that other mechanisms must also be considered in order to faithfully reproduce the appearance. Iridescence on the water surface is caused by droplets levitating above the surface, and interference between light scattered by drops and reflected by the water surface, known as Quetelet scattering, is essential to producing the color. We propose a model, new to computer graphics, for rendering this phenomenon, which we validate against photographs. For iridescent steam, we show that variation in droplet size is essential to the characteristic color patterns. We build a droplet growth model and apply it as a post-processing step to an existing computer graphics fluid simulation to compute collections of particles for rendering. We significantly accelerate the rendering of sparse particles with motion blur by intersecting rays with particle trajectories, blending contributions along viewing rays. Our model reproduces the distinctive color patterns correlated with the steam flow. For both effects, we instantiate individual droplets and render them explicitly, since the granularity of droplets is readily observed in reality, and demonstrate that Mie scattering alone cannot reproduce the visual appearance.
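
    A toy flavor of Quetelet-style coloration: two coherent paths, direct scattering versus reflection off the water surface followed by scattering, interfere with a height-dependent phase difference, so intensity varies with wavelength. This sketch ignores the Mie phase function and the real geometry entirely:

        import numpy as np

        lam = np.linspace(400e-9, 700e-9, 61)  # visible wavelengths in meters
        h = 2.0e-6                              # droplet height above the surface
        cos_theta = 0.9                         # viewing geometry
        r = 0.3                                 # relative amplitude of mirror path

        delta = 2.0 * h * cos_theta             # extra path length of mirror path
        phase = 2.0 * np.pi * delta / lam
        intensity = np.abs(1.0 + r * np.exp(1j * phase)) ** 2  # two-path fringes
        print(lam[np.argmax(intensity)], lam[np.argmin(intensity)])
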
  • Item
    A Practical and Hierarchical Yarn-based Shading Model for Cloth
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zhu, Junqiu; Montazeri, Zahra; Aubry, Jean-Marie; Yan, Ling-Qi; Weidlich, Andrea
    Realistic cloth rendering is a longstanding challenge in computer graphics due to the intricate geometry and hierarchical structure of cloth: fibers form plies, which in turn are combined into yarns, which are then woven or knitted into fabrics. Previous fiber-based models have achieved high-quality close-up rendering, but they suffer from high computational cost, which limits their practicality. In this paper, we propose a novel hierarchical model that analytically aggregates light simulation at the fiber level by building on dual-scattering theory. Based on this, we can perform an efficient simulation of ply and yarn shading. Compared to previous methods, our approach is faster and uses less memory while preserving similar accuracy, which we demonstrate through comparison with existing fiber-based shading models. Our yarn shading model can be applied to curves or surfaces, making it highly versatile for cloth shading. This duality, paired with its simplicity and flexibility, makes the model particularly useful for film and games production.
  • Item
    Accelerating Hair Rendering by Learning High-Order Scattered Radiance
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) KT, Aakash; Jarabo, Adrian; Aliaga, Carlos; Chiang, Matt Jen-Yuan; Maury, Olivier; Hery, Christophe; Narayanan, P. J.; Nam, Giljoo
    Efficiently and accurately rendering hair accounting for multiple scattering is a challenging open problem. Path tracing in hair is slow to converge, while other techniques are either too approximate while still being computationally expensive or make assumptions about the scene. We present a technique to infer the higher-order scattering in hair in constant time within the path tracing framework, while achieving better computational efficiency. Our method makes no assumptions about the scene and provides control over the renderer's bias and speedup. We achieve this by training a small multilayer perceptron (MLP) to learn the higher-order radiance online, while rendering progresses. We describe how to robustly train this network and thoroughly analyze our resulting renderer's characteristics. We evaluate our method on various hairstyles and lighting conditions. We also compare our method against a recent learning-based and a traditional real-time hair rendering method and demonstrate better quantitative and qualitative results. Our method achieves a significant improvement in speed with respect to path tracing, achieving a run-time reduction of 40%-70% while only introducing a small amount of bias.
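
    The estimator structure can be mimicked in a toy homogeneous medium whose order-n contribution is geometric in the albedo: orders up to k are path-traced exactly, and the higher-order tail is replaced by an inferred value (here the analytic tail stands in for the paper's MLP output); k is the bias/speedup control:

        import numpy as np

        def render(albedo, k, n_paths, tail, rng):
            # Trace scattering orders 1..k exactly; replace the >k tail with a
            # "learned" estimate (the paper queries an MLP; here it is a constant).
            total = 0.0
            for _ in range(n_paths):
                alive, bounce = True, 0
                while alive and bounce < k:
                    if rng.random() < albedo:   # Russian roulette per bounce
                        bounce += 1
                        total += 1.0            # unit deposit per scattering event
                    else:
                        alive = False
                if alive:
                    total += tail               # higher-order radiance, inferred
            return total / n_paths

        rng = np.random.default_rng(0)
        a, k = 0.9, 3
        exact_tail = a / (1.0 - a)              # stand-in for the MLP output
        print(render(a, k, 100_000, exact_tail, rng), a / (1.0 - a))  # ~9.0 vs 9.0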