Browsing by Author "Wang, Beibei"
Now showing 1 - 10 of 10
Item: Efficient Caustics Rendering via Spatial and Temporal Path Reuse (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Xu, Xiaofeng; Wang, Lu; Wang, Beibei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Caustics are complex optical effects caused by light being concentrated in a small area due to reflection or refraction on surfaces with low roughness, typically under a sharp light source. Rendering caustic effects is challenging for Monte Carlo-based approaches, due to the difficulty of sampling the specular paths. One effective solution is using the specular manifold to locate these valid specular paths. Unfortunately, this needs many iterations to find the paths, leading to a long rendering time. To address this issue, our key insight is that the specular paths tend to be similar for neighboring shading points. To this end, we propose to reuse the specular paths spatially. More specifically, we generate some specular path samples at a low sample rate and then reuse these samples as the initialization for specular manifold walks among neighboring shading points. In this way, far fewer path-searching iterations are performed, since the initialization is already close to the final solution. Furthermore, this reuse strategy can be extended to dynamic scenes in a temporal manner, such as moving lights or deforming specular geometry. Our method outperforms current state-of-the-art methods and can handle multiple bounces of light and various scenes.

Item: Fast Global Illumination with Discrete Stochastic Microfacets Using a Filterable Model (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Wang, Beibei; Wang, Lu; Holzschuch, Nicolas; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
Many real-life materials have a sparkling appearance, whether by design or by nature. Examples include metallic paints, sparkling varnish, but also snow. These sparkles correspond to small, isolated, shiny particles reflecting light in a specific direction, on the surface or embedded inside the material. The particles responsible for these sparkles are usually small and discontinuous. These characteristics make it difficult to integrate them efficiently in a standard rendering pipeline, especially for indirect illumination. Existing approaches use a 4-dimensional hierarchy, searching for light-reflecting particles simultaneously in space and direction. The approach is accurate, but still expensive. In this paper, we show that this 4-dimensional search can be approximated using separate 2-dimensional steps. This approximation allows fast integration of glint contributions for large footprints, reducing the extra cost associated with glints by an order of magnitude.
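To make the separable idea above concrete, here is a minimal sketch of how a joint 4D position-direction search can collapse into a product of two 2D terms. It assumes uniformly distributed particles and a Gaussian reflection lobe around the half vector; all names and constants are illustrative stand-ins, not the paper's actual model.

```python
import numpy as np

def expected_glints(n_particles, footprint_area, surface_area,
                    query_angle, lobe_width):
    # Spatial 2D step: expected number of particles inside the pixel
    # footprint, assuming particles are uniformly distributed.
    spatial = n_particles * footprint_area / surface_area
    # Directional 2D step: fraction of those particles whose reflection
    # lobe covers the query direction, modeled here as a Gaussian
    # falloff around the half vector.
    directional = np.exp(-0.5 * (query_angle / lobe_width) ** 2)
    return spatial * directional

# Example: 1e6 particles, a footprint covering 0.01% of the surface,
# queried 2 degrees off the lobe center with a 3-degree-wide lobe.
print(expected_glints(1e6, 1e-4, 1.0, np.radians(2.0), np.radians(3.0)))
```

Because each factor depends on only two dimensions, both can be precomputed or filtered independently, which is what makes large-footprint integration cheap.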
Item: Joint SVBRDF Recovery and Synthesis From a Single Image using an Unsupervised Generative Adversarial Network (The Eurographics Association, 2020)
Zhao, Yezi; Wang, Beibei; Xu, Yanning; Zeng, Zheng; Wang, Lu; Holzschuch, Nicolas; Dachsbacher, Carsten and Pharr, Matt
We want to recreate spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image. Producing these SVBRDFs from single images will allow designers to incorporate many new materials in their virtual scenes, increasing their realism. A single image contains incomplete information about the SVBRDF, making reconstruction difficult. Existing algorithms can produce high-quality SVBRDFs with single or few input photographs using supervised deep learning. The learning step relies on a huge dataset with both input photographs and the ground-truth SVBRDF maps. This is a weakness, as ground-truth maps are not easy to acquire. For practical use, it is also important to produce large SVBRDF maps. Existing algorithms rely on a separate texture synthesis step to generate these large maps, which leads to a loss of consistency between the generated SVBRDF maps. In this paper, we address both issues simultaneously. We present an unsupervised generative adversarial neural network that handles both SVBRDF capture from a single image and synthesis at the same time. From a low-resolution input image, we generate SVBRDF maps at a much larger resolution than the input. We train a generative adversarial network (GAN) to produce SVBRDF maps that have both a large spatial extent and detailed texels. We employ a two-stream generator that divides the training of the maps into two groups (normal and roughness as one, diffuse and specular as the other) to better optimize those four maps. In the end, our method is able to generate high-quality, large-scale SVBRDF maps from a single input photograph with repetitive structures, and provides higher-quality rendering results with more details compared to previous work. Each input requires individual training, which takes about 3 hours.

Item: Path-based Monte Carlo Denoising Using a Three-Scale Neural Network (© 2021 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Lin, Weiheng; Wang, Beibei; Yang, Jian; Wang, Lu; Yan, Ling-Qi; Benes, Bedrich and Hauser, Helwig
Monte Carlo rendering is widely used in the movie industry. Since it is costly to produce noise-free results directly, Monte Carlo denoising is often applied as a post-process. Recently, deep learning methods have been successfully leveraged in Monte Carlo denoising. They are able to produce high-quality denoised results, even at very low sample rates, e.g. 4 spp (samples per pixel). However, for difficult scene configurations, some details can be blurred in the denoised results. In this paper, we aim at preserving more details from inputs rendered with low spp. We propose a novel denoising pipeline that handles features at three scales (pixel, sample and path) to preserve sharp details, uses an improved Res2Net feature extractor to reduce the number of network parameters, and applies a smooth feature attention mechanism to remove low-frequency splotches. As a result, our method achieves higher denoising quality and preserves better details than previous methods.

Item: Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines (The Eurographics Association, 2021)
Tao, Chengzhi; Guo, Jie; Gong, Chen; Wang, Beibei; Guo, Yanwen; Lee, Sung-Hee and Zollmann, Stefanie and Okabe, Makoto and Wünsche, Burkhard
We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts that appear in highlights reflected from area lights when the meso-scale roughness (induced by normal maps) is ignored. The proposed method separates the surface roughness into different scales and represents each of them with LTCs. Then, a spherical convolution between them derives the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage consumption. Experimental results show that our approach produces convincing results of multi-scale roughness across a range of viewing distances for local area lighting.
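As a rough illustration of the roughness-combination step above: when both the base NDF and the normal-map-induced NDF are treated as approximately Gaussian lobes, their spherical convolution adds variances, so squared roughnesses combine additively. This is a common Toksvig-style approximation standing in for the paper's derivation, not its exact formulation.

```python
import numpy as np

def effective_roughness(alpha_base, alpha_meso):
    # Convolving two approximately Gaussian NDF lobes adds their
    # variances, so squared roughnesses sum. A Toksvig-style
    # approximation, not the paper's exact result.
    return np.sqrt(alpha_base**2 + alpha_meso**2)

# Example: a smooth base material (0.05) under a rough normal map (0.3)
# behaves almost like a 0.3-roughness surface at a distance.
print(effective_roughness(0.05, 0.3))  # ~0.304
```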
Item: Real-time Denoising Using BRDF Pre-integration Factorization (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Zhuang, Tao; Shen, Pengfei; Wang, Beibei; Liu, Ligang; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
Path tracing is now used for real-time rendering, thanks to powerful GPUs. Unfortunately, path tracing produces noisy results, so filtering or denoising is often applied as a post-process to remove the noise. Previous works produce high-quality denoised results by accumulating temporal samples. However, they cannot handle the details from bidirectional reflectance distribution function (BRDF) maps (e.g. roughness maps). In this paper, we introduce a BRDF pre-integration factorization for denoising to better preserve the details from BRDF maps. More specifically, we reformulate the rendering equation into two components: the BRDF pre-integration component and the weighted-lighting component. The BRDF pre-integration component is noise-free, since it does not depend on the lighting. Another key observation is that the weighted-lighting component tends to be smooth and low-frequency, which makes it more suitable for denoising than the final rendered image. Hence, the weighted-lighting component is denoised individually. Our BRDF pre-integration demodulation approach is compatible with many real-time filtering methods; we have implemented it in spatiotemporal variance-guided filtering (SVGF), ReLAX and ReBLUR. Compared to the original methods, our method better preserves the details from BRDF maps, while the extra memory and time costs are negligible.
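At its core, the factorization above is a demodulate-filter-remodulate loop. The sketch below shows that skeleton under simplifying assumptions (a per-pixel pre-integrated BRDF factor and an arbitrary image-space filter); the function names are placeholders, not the paper's API.

```python
import numpy as np

def factored_denoise(radiance, brdf_preint, denoise, eps=1e-4):
    # Demodulate: divide out the noise-free BRDF pre-integration factor,
    # leaving the smoother, low-frequency weighted-lighting component.
    lighting = radiance / np.maximum(brdf_preint, eps)
    # Filter only the lighting; any image-space denoiser can stand in here.
    filtered = denoise(lighting)
    # Remodulate: multiplying back restores the detail from the BRDF maps.
    return filtered * brdf_preint

# Example with a trivial box filter standing in for a real denoiser.
def box3(img):
    out = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out / 9.0

noisy = np.random.rand(8, 8)
albedo = np.random.rand(8, 8)
print(factored_denoise(noisy * albedo, albedo, box3).shape)  # (8, 8)
```

Because the pre-integrated factor carries the BRDF-map detail and is multiplied back after filtering, the denoiser never gets the chance to blur that detail away.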
Item: Real-Time Glints Rendering With Pre-Filtered Discrete Stochastic Microfacets (© 2020 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Wang, Beibei; Deng, Hong; Holzschuch, Nicolas; Benes, Bedrich and Hauser, Helwig
Many real-life materials have a sparkling appearance. Examples include metallic paints, sparkling fabrics and snow. Simulating these sparkles is important for realistic rendering but expensive. As sparkles come from small shiny particles reflecting light into a specific direction, they are very challenging for illumination simulation. Existing approaches use a four-dimensional hierarchy, searching for light-reflecting particles simultaneously in space and direction. The approach is accurate, but extremely expensive. A separable model is much faster, but still not suitable for real-time applications. The performance problem is even worse when illumination comes from environment maps, as they require either a large sample count per pixel or pre-filtering, and pre-filtering is incompatible with existing sparkle models due to their discrete multi-scale representation. In this paper, we present a GPU-friendly, pre-filtered model for real-time simulation of sparkles and glints. Our method simulates glints under both environment maps and point light sources in real time, with an added cost of just 10 ms per frame at full high-definition resolution. Editing material properties requires extra computation but remains real-time, with an added cost of 10 ms per frame.

Item: SVBRDF Recovery from a Single Image with Highlights Using a Pre-trained Generative Adversarial Network (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Wen, Tao; Wang, Beibei; Zhang, Lei; Guo, Jie; Holzschuch, Nicolas; Hauser, Helwig and Alliez, Pierre
Spatially varying bi-directional reflectance distribution functions (SVBRDFs) are crucial for designers to incorporate new materials in virtual scenes, making them look more realistic. Reconstruction of SVBRDFs is a long-standing problem. Existing methods either rely on an extensive acquisition system or require huge datasets, which are non-trivial to acquire. We aim to recover SVBRDFs from a single image, without any datasets. A single image contains incomplete information about the SVBRDF, making the reconstruction task highly ill-posed. Without the prior knowledge learned from a dataset, it is also difficult to separate the changes in colour caused by the material from those caused by the illumination. In this paper, we use an unsupervised generative adversarial neural network (GAN) to recover SVBRDF maps with a single image as input. To better separate the effects of illumination from the effects of the material, we add the assumption that the material is stationary and introduce a new loss function based on Fourier coefficients to enforce this stationarity. For efficiency, we train the network in two stages: we reuse a pre-trained model to initialize the SVBRDF maps, then fine-tune it on the input image. Our method generates high-quality SVBRDF maps from a single input photograph, and provides more vivid rendering results compared to previous work. The two-stage training boosts runtime performance, making it eight times faster than previous work.
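One way to picture a Fourier-based stationarity loss: a stationary material should have nearly the same amplitude spectrum in any local crop, so spectrum differences between random crops can be penalized. The sketch below is a hedged stand-in built on that idea; the paper's actual formulation may differ, and every parameter here is illustrative.

```python
import numpy as np

def stationarity_loss(tex, crop=64, n_pairs=8, seed=0):
    # A stationary texture has similar local statistics everywhere, so
    # the Fourier amplitude spectra of random crops should match.
    rng = np.random.default_rng(seed)
    h, w = tex.shape[:2]
    loss = 0.0
    for _ in range(n_pairs):
        spectra = []
        for _ in range(2):
            y = rng.integers(0, h - crop)
            x = rng.integers(0, w - crop)
            patch = tex[y:y + crop, x:x + crop]
            # Amplitude spectrum is translation-invariant, which is
            # exactly the property a stationarity penalty needs.
            spectra.append(np.abs(np.fft.fft2(patch, axes=(0, 1))))
        loss += np.mean((spectra[0] - spectra[1]) ** 2)
    return loss / n_pairs

# Random noise map scores high; a constant map scores exactly zero.
print(stationarity_loss(np.random.rand(256, 256)))
print(stationarity_loss(np.ones((256, 256))))
```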
Item: Unsupervised Image Reconstruction for Gradient-Domain Volumetric Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Xu, Zilin; Sun, Qiang; Wang, Lu; Xu, Yanning; Wang, Beibei; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Gradient-domain rendering can greatly improve the convergence of light transport simulation by exploiting the smoothness of image space. These methods generate image gradients and solve an image reconstruction problem from the rendered image and the gradient images. Recently, a gradient-domain volumetric photon density estimation was proposed for homogeneous participating media. However, its image reconstruction relies on traditional L1 reconstruction, which leads to obvious artifacts when only a few rendering passes are performed. Deep-learning-based reconstruction methods have been explored for surface rendering, but they are not suitable for volume density estimation. In this paper, we propose an unsupervised neural network for image reconstruction of gradient-domain volumetric photon density estimation, more specifically volumetric photon mapping. We use a variant of GradNet with an encoded shift connection and a separate auxiliary feature branch, which includes volume-based auxiliary features such as transmittance and photon density. Our network smooths the images at a global scale and preserves high-frequency details at a small scale. We demonstrate that our network produces higher-quality results than previous work. Although we only considered volumetric photon mapping, it is straightforward to extend our method to other estimators, such as beam radiance estimation.

Item: World-Space Spatiotemporal Path Resampling for Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhang, Hangyu; Wang, Beibei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
With the advent of hardware-accelerated ray tracing, more and more real-time rendering applications render images with ray-traced global illumination (GI). However, the low sample counts achievable at real-time frame rates pose enormous challenges for existing path sampling methods. Recent work (ReSTIR GI) samples indirect illumination effectively with a dramatic bias reduction. However, as a screen-space path resampling approach, it can only reuse paths at the first bounce, which brings limited benefits for complex scenes. To this end, we propose a world-space spatiotemporal path resampling approach. Our approach caches more path samples in a world-space grid, which allows reusing sub-paths starting from non-primary path vertices. Furthermore, we introduce a practical normal-aware hash grid construction approach, providing more efficient candidate samples for path resampling. In the end, our method achieves improvements ranging from 16.6% to 41.9% in terms of mean squared error (MSE) compared with the previous method, with only 4.4% to 8.4% extra time cost.
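To illustrate what "normal-aware" can mean for such a grid: the cell key can mix a quantized world position with a coarse normal bin, so samples are only shared between points that agree in both. The sketch below is illustrative only; the cell size, normal quantization and hash constants are placeholders, not the paper's actual choices.

```python
import numpy as np

def hash_cell(position, normal, cell_size=0.5, table_size=1 << 20):
    # Quantize the position to a voxel of the world-space grid.
    cell = np.floor(np.asarray(position, dtype=np.float64) / cell_size)
    cell = cell.astype(np.int64)
    # Quantize the normal to one of six coarse bins (dominant axis and
    # sign), so samples only mix between similarly oriented surfaces.
    n = np.asarray(normal, dtype=np.float64)
    axis = int(np.argmax(np.abs(n)))
    nbin = 2 * axis + (0 if n[axis] >= 0.0 else 1)
    # Classic XOR-of-primes spatial hash, extended with the normal bin.
    key = (int(cell[0]) * 73856093) ^ (int(cell[1]) * 19349663) \
        ^ (int(cell[2]) * 83492791) ^ (nbin * 2654435761)
    return key % table_size

# Two coincident points with opposing normals land in different cells,
# so resampling never mixes candidates across a thin wall.
print(hash_cell([1.2, 0.3, -4.0], [0, 1, 0]))
print(hash_cell([1.2, 0.3, -4.0], [0, -1, 0]))
```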