Rendering 2022 - Symposium Track


Prague, Czech Republic & Virtual | July 4th - July 6th 2022
Lighting
Spectral Upsampling Approaches for RGB Illumination
Giuseppe Claudio Guarnera, Yuliya Gitlina, Valentin Deschaintre, and Abhijeet Ghosh
SkyGAN: Towards Realistic Cloud Imagery for Image Based Lighting
Martin Mirbauer, Tobias Rittig, Tomáš Iser, Jaroslav Krivánek, and Elena Šikudová
Sampling
Planetary Shadow-Aware Distance Sampling
Carl Breyer and Tobias Zirr
Volume Rendering
Stenciled Volumetric Ambient Occlusion
Felix Brüll, René Kern, and Thorsten Grosch
Material Modeling and Measurement
Polarization-imaging Surface Reflectometry using Near-field Display
Emilie Nogue, Yiming Lin, and Abhijeet Ghosh
Neural Rendering
A Learned Radiance-Field Representation for Complex Luminaires
Jorge Condor and Adrián Jarabo
NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
Zhong Li, Liangchen Song, Celong Liu, Junsong Yuan, and Yi Xu
High Performance Rendering
A Real-Time Adaptive Ray Marching Method for Particle-Based Fluid Surface Reconstruction
Tong Wu, Zhiqiang Zhou, Anlan Wang, Yuning Gong, and Yanci Zhang
Precomputed Discrete Visibility Fields for Real-Time Ray-Traced Environment Lighting
Yang Xu, Yuanfa Jiang, Chenhao Wang, Kang Li, Pengbo Zhou, and Guohua Geng
Stylized Rendering
GPU-Driven Real-Time Mesh Contour Vectorization
Wangziwei Jiang, Guiqing Li, Yongwei Nie, and Chuhua Xian
HedcutDrawings: Rendering Hedcut Style Portraits
Karelia Pena-Pena and Gonzalo R. Arce
Patterns and Noises
Spatiotemporal Blue Noise Masks
Alan Wolfe, Nathan Morrical, Tomas Akenine-Möller, and Ravi Ramamoorthi
Industry Track Papers
Multi-Fragment Rendering for Glossy Bounces on the GPU
Atsushi Yoshimura, Yusuke Tokuyoshi, and Takahiro Harada
Generalized Decoupled and Object Space Shading System
Daniel Baker and Mark Jarzynski
GAN-based Defect Image Generation for Imbalanced Defect Classification of OLED panels
Yongmoon Jeon, Haneol Kim, Hyeona Lee, Seonghoon Jo, and Jaewon Kim

BibTeX (Rendering 2022 - Symposium Track)
@inproceedings{10.2312:sr.20222013,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Rendering 2022 Symposium Track: Frontmatter}},
  author = {Ghosh, Abhijeet and Wei, Li-Yi},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20222013}
}
@inproceedings{10.2312:sr.20221150,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Spectral Upsampling Approaches for RGB Illumination}},
  author = {Guarnera, Giuseppe Claudio and Gitlina, Yuliya and Deschaintre, Valentin and Ghosh, Abhijeet},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221150}
}
@inproceedings{10.2312:sr.20221151,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{SkyGAN: Towards Realistic Cloud Imagery for Image Based Lighting}},
  author = {Mirbauer, Martin and Rittig, Tobias and Iser, Tomáš and Krivánek, Jaroslav and Šikudová, Elena},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221151}
}
@inproceedings{10.2312:sr.20221152,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Planetary Shadow-Aware Distance Sampling}},
  author = {Breyer, Carl and Zirr, Tobias},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221152}
}
@inproceedings{10.2312:sr.20221153,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Stenciled Volumetric Ambient Occlusion}},
  author = {Brüll, Felix and Kern, René and Grosch, Thorsten},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221153}
}
@inproceedings{10.2312:sr.20221154,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Polarization-imaging Surface Reflectometry using Near-field Display}},
  author = {Nogue, Emilie and Lin, Yiming and Ghosh, Abhijeet},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221154}
}
@inproceedings{10.2312:sr.20221156,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field}},
  author = {Li, Zhong and Song, Liangchen and Liu, Celong and Yuan, Junsong and Xu, Yi},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221156}
}
@inproceedings{10.2312:sr.20221155,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{A Learned Radiance-Field Representation for Complex Luminaires}},
  author = {Condor, Jorge and Jarabo, Adrián},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221155}
}
@inproceedings{10.2312:sr.20221157,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{A Real-Time Adaptive Ray Marching Method for Particle-Based Fluid Surface Reconstruction}},
  author = {Wu, Tong and Zhou, Zhiqiang and Wang, Anlan and Gong, Yuning and Zhang, Yanci},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221157}
}
@inproceedings{10.2312:sr.20221158,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Precomputed Discrete Visibility Fields for Real-Time Ray-Traced Environment Lighting}},
  author = {Xu, Yang and Jiang, Yuanfa and Wang, Chenhao and Li, Kang and Zhou, Pengbo and Geng, Guohua},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221158}
}
@inproceedings{10.2312:sr.20221159,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{GPU-Driven Real-Time Mesh Contour Vectorization}},
  author = {Jiang, Wangziwei and Li, Guiqing and Nie, Yongwei and Xian, Chuhua},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221159}
}
@inproceedings{10.2312:sr.20221160,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{HedcutDrawings: Rendering Hedcut Style Portraits}},
  author = {Pena-Pena, Karelia and Arce, Gonzalo R.},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221160}
}
@inproceedings{10.2312:sr.20221161,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Spatiotemporal Blue Noise Masks}},
  author = {Wolfe, Alan and Morrical, Nathan and Akenine-Möller, Tomas and Ramamoorthi, Ravi},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221161}
}
@inproceedings{10.2312:sr.20221164,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{GAN-based Defect Image Generation for Imbalanced Defect Classification of OLED panels}},
  author = {Jeon, Yongmoon and Kim, Haneol and Lee, Hyeona and Jo, Seonghoon and Kim, Jaewon},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221164}
}
@inproceedings{10.2312:sr.20221163,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Generalized Decoupled and Object Space Shading System}},
  author = {Baker, Daniel and Jarzynski, Mark},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221163}
}
@inproceedings{10.2312:sr.20221162,
  booktitle = {Eurographics Symposium on Rendering},
  editor = {Ghosh, Abhijeet and Wei, Li-Yi},
  title = {{Multi-Fragment Rendering for Glossy Bounces on the GPU}},
  author = {Yoshimura, Atsushi and Tokuyoshi, Yusuke and Harada, Takahiro},
  year = {2022},
  publisher = {The Eurographics Association},
  ISSN = {1727-3463},
  ISBN = {978-3-03868-187-8},
  DOI = {10.2312/sr.20221162}
}

Recent Submissions

  • Item
    Rendering 2022 Symposium Track: Frontmatter
    (The Eurographics Association, 2022) Ghosh, Abhijeet; Wei, Li-Yi
  • Item
    Spectral Upsampling Approaches for RGB Illumination
    (The Eurographics Association, 2022) Guarnera, Giuseppe Claudio; Gitlina, Yuliya; Deschaintre, Valentin; Ghosh, Abhijeet
    We present two practical approaches for high-fidelity spectral upsampling of previously recorded RGB illumination in the form of an image-based representation such as an RGB light probe. Unlike previous approaches that require multiple measurements with a spectrometer or a reference color chart under a target illumination environment, our method requires no additional information for the spectral upsampling step. Instead, we construct a data-driven basis of spectral distributions for incident illumination from a set of six RGBW LEDs (three narrowband and three broadband) that we employ to represent a given RGB color using a convex combination of the six basis spectra. We propose two different approaches for estimating the weights of the convex combination: (a) a genetic algorithm, and (b) neural networks. We additionally propose a theoretical basis consisting of a set of narrow and broad Gaussians as a generalization of the approach, and also evaluate an alternate LED basis for spectral upsampling. Our spectral upsampling approach achieves good qualitative matches between the predicted and ground-truth illumination spectra, while achieving near-perfect matches to the RGB color of the given illumination in the vast majority of cases. We demonstrate that the spectrally upsampled RGB illumination can be employed for various applications, including improved lighting reproduction as well as more accurate spectral rendering.
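The core of the upsampling step, representing a target RGB as a convex combination of six basis spectra, can be sketched as follows. The Gaussian basis spectra, the boxcar camera response, and the least-squares-plus-projection solver are all illustrative stand-ins chosen here for brevity; the paper estimates the weights with a genetic algorithm or a neural network instead.

```python
import numpy as np

wl = np.linspace(400, 700, 31)            # wavelength samples, nm

def gauss(mu, sig):
    """Normalized Gaussian emission spectrum (stand-in for a measured LED)."""
    s = np.exp(-0.5 * ((wl - mu) / sig) ** 2)
    return s / s.max()

# Hypothetical 6-LED basis: three narrowband + three broadband emitters.
S = np.stack([gauss(450, 15), gauss(530, 15), gauss(620, 15),
              gauss(460, 60), gauss(550, 60), gauss(610, 60)], axis=1)  # 31 x 6

# Crude boxcar RGB sensor response (stand-in for real camera sensitivities).
R = np.stack([wl >= 580,
              (wl >= 490) & (wl < 580),
              wl < 490]).astype(float)    # 3 x 31

def upsample(rgb):
    """Return a spectrum S @ w whose RGB rendering approximates `rgb`."""
    A = R @ S                             # 3 x 6: RGB of each basis spectrum
    w, *_ = np.linalg.lstsq(A, np.asarray(rgb, float), rcond=None)
    w = np.clip(w, 0.0, None)             # crude projection onto the simplex;
    w = w / w.sum()                       # the paper uses a GA or a neural net
    return S @ w, w
```

Because the system is underdetermined (six weights, three constraints), many spectra map to the same RGB; the learned approaches in the paper pick weights that also match plausible real-world illuminants.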
  • Item
    SkyGAN: Towards Realistic Cloud Imagery for Image Based Lighting
    (The Eurographics Association, 2022) Mirbauer, Martin; Rittig, Tobias; Iser, Tomáš; Krivánek, Jaroslav; Šikudová, Elena
    Achieving photorealism when rendering virtual scenes in movies or architecture visualizations often depends on providing a realistic illumination and background. Typically, spherical environment maps serve both as a natural light source from the Sun and the sky, and as a background with clouds and a horizon. In practice, the input is either a static high-resolution HDR photograph manually captured on location in real conditions, or an analytical clear sky model that is dynamic, but cannot model clouds. Our approach bridges these two limited paradigms: a user can control the sun position and cloud coverage ratio, and generate a realistic-looking environment map for these conditions. It is a hybrid data-driven analytical model based on a modified state-of-the-art GAN architecture, which is trained on matching pairs of physically accurate clear sky radiance and HDR fisheye photographs of clouds. We demonstrate our results on renders of outdoor scenes under varying time, date, and cloud cover.
  • Item
    Planetary Shadow-Aware Distance Sampling
    (The Eurographics Association, 2022) Breyer, Carl; Zirr, Tobias
    Dusk and dawn scenes have been difficult for brute-force path tracers to handle. We identify that a major source of the inefficiency in explicitly path tracing the atmosphere in such conditions stems from wasting samples on the denser, lower parts of the atmosphere, which get shadowed by the planet before the upper, thinner parts when the star sets below the horizon. We present a technique that overcomes this issue by sampling the star only from the unshadowed segments along rays, based on boundaries found by intersecting a cylinder fit to the planet's shadow. We also sample the transmittance by mapping the distances of the boundaries to opacities and sampling the visible segments uniformly in opacity space. Our technique achieves quality similar to brute-force path tracing in roughly 1/60th of the time in such conditions.
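The key geometric ingredient, finding where a ray enters and exits the shadow cylinder fit to the planet, reduces to a ray/infinite-cylinder intersection. A minimal sketch, assuming for simplicity that the cylinder axis is aligned with the z axis (in the actual technique the axis follows the star direction, and only the half behind the planet is shadowed):

```python
import math

def ray_shadow_cylinder(o, v, radius):
    """Entry/exit distances of ray o + t*v against an infinite cylinder
    of given radius around the z axis, or None if it misses."""
    # Project onto the xy plane: the cylinder is a circle there.
    a = v[0] ** 2 + v[1] ** 2
    b = 2.0 * (o[0] * v[0] + o[1] * v[1])
    c = o[0] ** 2 + o[1] ** 2 - radius ** 2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                        # ray parallel to axis, or misses
    s = math.sqrt(disc)
    return ((-b - s) / (2.0 * a), (-b + s) / (2.0 * a))
```

The two returned distances split the ray into shadowed and unshadowed segments, which is what lets the sampler skip the fully shadowed part of the atmosphere.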
  • Item
    Stenciled Volumetric Ambient Occlusion
    (The Eurographics Association, 2022) Brüll, Felix; Kern, René; Grosch, Thorsten
    Screen-space Ambient Occlusion (AO) is commonly used in games to calculate the exposure of each pixel to ambient lighting with the help of a depth buffer. Due to its screen-space nature, it suffers from several artifacts if depth information is missing. Stochastic-Depth AO [VSE21] was introduced to minimize the probability of missing depth information; however, rendering a full stochastic depth map can be very expensive. We introduce a novel rendering pipeline for AO that divides the AO pass into two phases, which allows us to create a stochastic depth map for only a subset of pixels, in order to decrease the rendering time drastically. We also introduce a variant that replaces the stochastic depth map with ray tracing and has competitive performance. Our AO variants are based on Volumetric AO [LS10], which produces similar effects to the commonly used horizon-based AO [BSD08] but requires fewer texture samples to produce good results.
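To illustrate the idea behind Volumetric AO, estimating how much of a sphere around a pixel is occupied by geometry according to the depth buffer, here is a toy point-sampled version on an integer depth grid. This is a simplification for illustration only; the actual method [LS10] integrates occupied volume along lines and operates on real depth buffers.

```python
def volumetric_ao(depth, px, py, samples, radius):
    """Toy Volumetric AO: return the fraction of sphere-interior samples
    that are NOT buried behind the depth buffer (1.0 = fully unoccluded)."""
    occluded = 0
    for dx, dy, dz in samples:             # integer offsets inside the unit sphere
        sx, sy = px + dx * radius, py + dy * radius
        sample_depth = depth[py][px] + dz * radius
        if sample_depth > depth[sy][sx]:   # sample lies behind the stored surface
            occluded += 1
    return 1.0 - occluded / len(samples)
```

On a flat depth buffer, only the samples pushed behind the surface count as occluded, so the estimate approaches the expected half-sphere visibility as the sample count grows.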
  • Item
    Polarization-imaging Surface Reflectometry using Near-field Display
    (The Eurographics Association, 2022) Nogue, Emilie; Lin, Yiming; Ghosh, Abhijeet
    We present a practical method for measurement of spatially varying isotropic surface reflectance of planar samples using a combination of single-view polarization imaging and near-field display illumination. Unlike previous works that have required multiview imaging or more complex polarization measurements, our method requires only three linear polarizer measurements from a single viewpoint for estimating diffuse and specular albedo and spatially varying specular roughness. We obtain a high-quality estimate of the surface normal with two additional polarized measurements under a gradient illumination pattern. Our approach enables high-quality renderings of planar surfaces while reducing measurements to a near-optimal number for the estimated SVBRDF parameters.
  • Item
    NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
    (The Eurographics Association, 2022) Li, Zhong; Song, Liangchen; Liu, Celong; Yuan, Junsong; Xu, Yi
    In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a function that indexes rays to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. Then, the scene-specific model is used to synthesize novel views. Different from previous light field approaches, which require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering with a sparser set of training images. Per-ray depth can be optionally predicted by the network, thus enabling applications such as auto refocus. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
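The two-plane (light-slab) parameterization the paper adopts maps a ray to a 4D coordinate by intersecting it with two parallel planes; a fully connected network then maps (u, v, s, t) to RGB. The parameterization itself is simple to sketch (the plane depths here are arbitrary illustrative choices):

```python
def two_plane(o, d, z_uv=0.0, z_st=1.0):
    """Map a ray o + t*d to its 4D light-field coordinate (u, v, s, t)
    by intersecting it with the planes z = z_uv and z = z_st."""
    t_uv = (z_uv - o[2]) / d[2]            # distance to the (u, v) plane
    t_st = (z_st - o[2]) / d[2]            # distance to the (s, t) plane
    return (o[0] + t_uv * d[0], o[1] + t_uv * d[1],
            o[0] + t_st * d[0], o[1] + t_st * d[1])
```

Because every camera ray becomes a single 4D point, rendering a novel view is one batched network query per pixel, with no per-ray volume marching.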
  • Item
    A Learned Radiance-Field Representation for Complex Luminaires
    (The Eurographics Association, 2022) Condor, Jorge; Jarabo, Adrián
    We propose an efficient method for rendering complex luminaires using a high-quality octree-based representation of the luminaire emission. Complex luminaires are a particularly challenging problem in rendering, due to their caustic light paths inside the luminaire. We reduce the geometric complexity of luminaires by using a simple proxy geometry, and encode the visually complex emitted light field by using a neural radiance field. We tackle the multiple challenges of using NeRFs for representing luminaires, including their high dynamic range, high-frequency content, and null-emission areas, by proposing a specialized loss function. For rendering, we distill our luminaires' NeRF into a plenoctree, which can be easily integrated into traditional rendering systems. Our approach allows for speed-ups of up to two orders of magnitude in scenes containing complex luminaires, while introducing minimal error.
  • Item
    A Real-Time Adaptive Ray Marching Method for Particle-Based Fluid Surface Reconstruction
    (The Eurographics Association, 2022) Wu, Tong; Zhou, Zhiqiang; Wang, Anlan; Gong, Yuning; Zhang, Yanci
    In the rendering of particle-based fluids, the surfaces reconstructed by ray marching techniques contain more details than screen space filtering methods. However, the ray marching process is quite time-consuming because it needs a large number of steps for each ray. In this paper, we introduce an adaptive ray marching method to construct high-quality fluid surfaces in real time. In order to reduce the number of ray marching steps, we propose a new data structure called a binary density grid, so that our ray marching method is capable of adaptively adjusting the step length. We also classify the fluid particles into two categories, i.e., high-density aggregations and low-density splashes. Based on this classification, two depth maps are generated to quickly provide the accurate start and approximated stop points of ray marching. In addition to reducing the number of marching steps, we also propose a method to adaptively determine the number of rays cast for different screen regions. Finally, to improve the quality of reconstructed surfaces, we present a method to adaptively blend the normal vectors computed from screen and object space. With the various adaptive optimizations mentioned above, our method can reconstruct high-quality fluid surfaces in real time.
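The role of the binary density grid, long steps through empty space and short steps once the ray is near fluid, can be sketched in 1D along a single ray. The predicates and step sizes below are illustrative stand-ins, not the paper's actual data structure or parameters:

```python
def adaptive_march(occupied, density, t0, t1, coarse=0.5, fine=0.25, iso=0.5):
    """March a ray from t0 to t1: long steps while the binary grid reports
    empty space, short steps near fluid; return the first hit or None."""
    t = t0
    while t < t1:
        if not occupied(t):
            t += coarse                    # binary grid says empty: long step
        elif density(t) >= iso:
            return t                       # crossed the fluid iso-surface
        else:
            t += fine                      # inside an occupied cell: refine
    return None
```

The paper additionally bounds the march with start/stop depth maps and varies the ray count per screen region; this sketch shows only the step-length adaptation.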
  • Item
    Precomputed Discrete Visibility Fields for Real-Time Ray-Traced Environment Lighting
    (The Eurographics Association, 2022) Xu, Yang; Jiang, Yuanfa; Wang, Chenhao; Li, Kang; Zhou, Pengbo; Geng, Guohua
    Rendering environment lighting using ray tracing is challenging because many rays within the hemisphere are required to be traced. In this work, we propose discrete visibility fields (DVFs), which store visibility information in a uniform grid to speed up ray-traced low-frequency environment lighting for static scenes. In the precomputation stage, we compute and store the visibility and occlusion masks at the positions of the point samples of the scene using octahedral mapping. The visibility and occlusion masks of the point samples inside a grid cell are then merged by the logical OR operation. We also store the occlusion label indicating whether more than half of the pixels are occluded in the occlusion mask of each grid cell. At runtime, we exclude the rays occluded by the geometry or visible to the environment according to the information stored in the DVF. Results show that the proposed method can significantly speed up the rendering of ray-traced environment lighting and achieve real-time frame rates without sacrificing image quality. Compared to other environment lighting rendering methods based on precomputation, our method is free of tessellation or parameterization of the meshes, and the precomputation can be finished in a short time.
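The octahedral mapping used to index the visibility and occlusion masks projects a direction on the sphere to a square, so each mask can be stored as a small bitfield and the masks of a cell's point samples merged with a logical OR. A minimal sketch (the mask resolution here is an arbitrary choice, not the paper's):

```python
def _sign(v):
    return 1.0 if v >= 0.0 else -1.0

def oct_encode(d):
    """Octahedral mapping: unit direction -> point in [0, 1]^2."""
    x, y, z = d
    n = abs(x) + abs(y) + abs(z)
    x, y, z = x / n, y / n, z / n
    if z < 0.0:                            # fold the lower hemisphere outward
        x, y = (1.0 - abs(y)) * _sign(x), (1.0 - abs(x)) * _sign(y)
    return (0.5 * x + 0.5, 0.5 * y + 0.5)

def mask_bit(d, res=8):
    """Bit index of direction d in a res x res direction mask."""
    u, v = oct_encode(d)
    px = min(int(u * res), res - 1)
    py = min(int(v * res), res - 1)
    return py * res + px

# Masks of the point samples inside a grid cell merge with a logical OR.
cell_mask = 0
for d in [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, -1.0)]:
    cell_mask |= 1 << mask_bit(d)
```

At runtime, testing whether a candidate ray direction can be skipped is then a single bit lookup in the merged cell mask.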
  • Item
    GPU-Driven Real-Time Mesh Contour Vectorization
    (The Eurographics Association, 2022) Jiang, Wangziwei; Li, Guiqing; Nie, Yongwei; Xian, Chuhua
    Rendering contours of 3D meshes has a wide range of applications. Previous CPU-based contour rendering algorithms support advanced stylized effects but cannot achieve real-time performance. On the other hand, real-time algorithms based on the GPU have to sacrifice some advanced stylization effects due to the difficulty of linking contour elements into stroke curves. This paper proposes a GPU-based mesh contour rendering method which includes the following steps: (1) before rendering, a preprocessing step analyzes the adjacency and geometric information of the 3D mesh model; (2) at runtime, an extraction stage first selects contour edges from the 3D mesh model, then the parallelized Bresenham algorithm rasterizes the contour edges into a set of oriented contour pixels; (3) next, Potrace is parallelized to extract (pixel) edge loops from the contour pixels; (4) subsequently, a novel segmentation procedure is designed to partition the edge loops into strokes; (5) finally, these strokes are converted into 2D strip meshes in order to support rendering with controllable styles. Except for the preprocessing step, all procedures are implemented in parallel on a GPU. This enables our framework to achieve real-time performance for high-resolution rendering of dense mesh models.
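Step (2)'s rasterization of contour edges uses the Bresenham algorithm, which walks a line segment one pixel at a time using only integer arithmetic; the paper parallelizes it per edge on the GPU. The classic serial form:

```python
def bresenham(x0, y0, x1, y1):
    """Rasterize the segment (x0, y0)-(x1, y1) into a list of integer pixels."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1              # step direction per axis
    sy = 1 if y0 < y1 else -1
    err = dx + dy                          # accumulated integer error term
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return pixels
        e2 = 2 * err
        if e2 >= dy:                       # error says: advance in x
            err += dy
            x0 += sx
        if e2 <= dx:                       # error says: advance in y
            err += dx
            y0 += sy
```

Because each edge is rasterized independently, the loop maps naturally to one GPU thread per contour edge.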
  • Item
    HedcutDrawings: Rendering Hedcut Style Portraits
    (The Eurographics Association, 2022) Pena-Pena, Karelia; Arce, Gonzalo R.
    Stippling illustrations of CEOs, authors, and world leaders have become an iconic style. Dot after dot is meticulously placed by professional artists to complete a hedcut, an extremely time-consuming and painstaking task. The automatic generation of hedcuts by a computer is not simple, since an algorithm must capture both the structure of faces and the binary rendering of illustrations. Current challenges relate to the shape and placement of the dots without generating unwanted regularity artifacts. Recent neural style transfer techniques successfully separate the style from the content information of an image. However, such an approach, as is, is not suitable for stippling rendering, since its output suffers from spillover artifacts and the placement of dots is arbitrary. The lack of aligned training data pairs also constrains the use of other deep-learning-based techniques. To address these challenges, we propose a new neural-based style transfer algorithm that uses side information to impose additional constraints on the direction of the dots. Experimental results show significant improvement in rendering hedcuts.
  • Item
    Spatiotemporal Blue Noise Masks
    (The Eurographics Association, 2022) Wolfe, Alan; Morrical, Nathan; Akenine-Möller, Tomas; Ramamoorthi, Ravi
    Blue noise error patterns are well suited to human perception, and when applied to stochastic rendering techniques, blue noise masks can minimize unwanted low-frequency noise in the final image. Current methods of applying different blue noise masks to each rendered frame result in either white noise frequency spectra temporally, and thus poor convergence and stability, or lower quality spatially. We propose novel blue noise masks that retain high quality blue noise spatially, yet when animated produce values at each pixel that are well distributed over time. To do so, we create scalar valued masks by modifying the energy function of the void and cluster algorithm. To create uniform and nonuniform vector valued masks, we make the same modifications to the blue-noise dithered sampling algorithm. These masks exhibit blue noise frequency spectra in both the spatial and temporal domains, resulting in visually pleasing error patterns, rapid convergence speeds, and increased stability when filtered temporally. Since masks can be initialized with arbitrary sample sets, these improvements can be used on a large variety of problems, both uniformly and importance sampled. We demonstrate these improvements in volumetric rendering, ambient occlusion, and stochastic convolution. By extending spatial blue noise to spatiotemporal blue noise, we overcome the convergence limitations of prior blue noise works, enabling new applications for blue noise distributions. Usable masks and source code can be found at https://github.com/NVIDIAGameWorks/SpatiotemporalBlueNoiseSDK.
  • Item
    GAN-based Defect Image Generation for Imbalanced Defect Classification of OLED panels
    (The Eurographics Association, 2022) Jeon, Yongmoon; Kim, Haneol; Lee, Hyeona; Jo, Seonghoon; Kim, Jaewon
    Image classification based on neural networks has been widely explored in machine learning, and most research has focused on developing more efficient and accurate network models for a given image dataset, mostly of natural scenes. However, industrial image data differ from natural scene images in the shape of target objects, background patterns, and color. Additionally, data imbalance is one of the most challenging problems degrading classification accuracy for industrial images. This paper proposes a novel GAN-based image generation method to improve classification accuracy for defect images of OLED panels. We validate that our method can synthetically generate defect images of OLED panels, and that classification accuracy can be improved by training minor classes with the generated defect images.
  • Item
    Generalized Decoupled and Object Space Shading System
    (The Eurographics Association, 2022) Baker, Daniel; Jarzynski, Mark
    We present a generalized decoupled and object space shading system. This system includes a new layer material system, a dynamic hierarchical sparse shade-space allocation system, a GPU shade work dispatcher, and a multi-frame shade work distribution system. Together, these new systems create a generalized solution to decoupled shading, solving the visibility problem, where shading samples that are not visible are shaded; the overshading and undershading problem, where parts of the scene are shaded with a higher or lower number of samples than needed; and the shade allocation problem, where shade samples must be efficiently stored in GPU memory. The generalized decoupled shading system introduced here shades and stores only the samples that are actually needed for rendering a frame, with minimal overshading and no undershading. It does so with minimal overhead, and overall performance is competitive with other rendering techniques on a modern GPU.
  • Item
    Multi-Fragment Rendering for Glossy Bounces on the GPU
    (The Eurographics Association, 2022) Yoshimura, Atsushi; Tokuyoshi, Yusuke; Harada, Takahiro
    Multi-fragment rendering provides additional degrees of freedom in postprocessing. It allows us to edit images rendered with antialiasing, motion blur, depth of field, and transparency. To store multiple fragments, relationships between pixels and scene elements are often encoded into an existing image format. Most multi-fragment rendering systems, however, take into account only directly visible fragments on primary rays. The pixel coverage of indirectly visible fragments on reflected or refracted rays has not been well discussed. In this paper, we extend the generation of multiple fragments to support the indirect visibility in multiple bounces, which is often required by artists for image manipulation in productions. Our method is compatible with an existing multi-fragment image format such as Cryptomatte, and does not need any additional ray traversals during path tracing.