40-Issue 4

Item: A Combined Scattering and Diffraction Model for Elliptical Hair Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Benamira, Alexis; Pattanaik, Sumanta; Bousseau, Adrien and McGuire, Morgan
Realistic hair rendering relies on fiber scattering models. These models are based either on ray tracing or on full wave propagation through the hair fiber. Ray tracing can model most of the observed scattering phenomena but misses the important effect of diffraction. Indeed, the specific dimensions and geometry of natural human hair demand that the wave nature of light be taken into consideration for accurate rendering. However, current full-wave models require an impractical precomputation of several days, which must be repeated for every change in hair geometry or color. We present in this paper a dual hair scattering model which considers the dual aspect of light: as a wave and as a ray. Our model accurately simulates both diffraction and scattering phenomena without requiring any precomputation. Furthermore, it can simulate light transport in hairs of arbitrary elliptical cross-sections. This new dual approach enables our model to significantly improve the appearance of rendered hair and qualitatively match scattering and diffraction effects seen in photos of real hair while adding little computational overhead.

Item: Deep Portrait Lighting Enhancement with 3D Guidance
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Han, Fangzhou; Wang, Can; Du, Hao; Liao, Jing; Bousseau, Adrien and McGuire, Morgan
Despite recent breakthroughs in deep learning methods for image lighting enhancement, they are inferior when applied to portraits because 3D facial information is ignored in their models. To address this, we present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance. Our framework consists of two stages. In the first stage, corrected lighting parameters are predicted by a network from the badly lit input image, with the assistance of a 3D morphable model and a differentiable renderer. Given the predicted lighting parameters, the differentiable renderer renders a face image with corrected shading and texture, which serves as the 3D guidance for learning image lighting enhancement in the second stage. To better exploit the long-range correlations between the input and the guidance, in the second stage we design an image-to-image translation network with a novel transformer architecture, which automatically produces a lighting-enhanced result. Experimental results on the FFHQ dataset and in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.

Item: An Analytic BRDF for Materials with Spherical Lambertian Scatterers
(The Eurographics Association and John Wiley & Sons Ltd., 2021) d'Eon, Eugene; Bousseau, Adrien and McGuire, Morgan
We present a new analytic BRDF for porous materials composed of spherical Lambertian scatterers. The BRDF has a single parameter: the albedo of the Lambertian particles. The resulting appearance exhibits strong back-scattering and saturation effects that height-field-based models such as Oren-Nayar cannot reproduce.
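
The single-scattering behaviour of such a medium is easy to probe numerically. Below is a minimal sketch of the classic phase function of an opaque Lambertian sphere under geometric optics, the basic building block of this kind of material; the paper's analytic BRDF additionally accounts for multiple scattering between particles, which this sketch omits.

```python
import numpy as np

def lambert_sphere_phase(theta):
    """Phase function of an opaque Lambertian sphere (geometric optics)
    as a function of scattering angle theta in [0, pi], normalized so the
    integral over the full sphere of directions equals 1."""
    return (2.0 / (3.0 * np.pi ** 2)) * (np.sin(theta) - theta * np.cos(theta))

# Sanity checks: zero forward scatter, a strong backward lobe, unit integral.
theta = np.linspace(0.0, np.pi, 100001)
p = lambert_sphere_phase(theta)
print(p[0], p[-1])  # 0.0 and ~0.212: no forward scattering, strong back lobe
print(np.sum(p * 2.0 * np.pi * np.sin(theta)) * (theta[1] - theta[0]))  # ~1.0
```
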
Item: Q-NET: A Network for Low-dimensional Integrals of Neural Proxies
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Subr, Kartic; Bousseau, Adrien and McGuire, Morgan
Integrals of multidimensional functions are often estimated by averaging function values at multiple locations. The use of an approximate surrogate or proxy for the true function is useful if repeated evaluations are necessary. A proxy is even more useful if its own integral is known analytically and can be calculated practically. We design a family of fixed networks, which we call Q-NETs, that can calculate integrals of functions represented by sigmoidal universal approximators. Q-NETs operate on the parameters of the trained proxy and can calculate exact integrals over any subset of dimensions of the input domain. Q-NETs also facilitate convenient recalculation of integrals without resampling the integrand or retraining the proxy, under certain transformations to the input space. We highlight the benefits of this scheme for diverse rendering applications including inverse rendering, sampled procedural noise and visualization. Q-NETs are appealing in the following contexts: the dimensionality is low (< 10D); integrals of a sampled function need to be recalculated over different sub-domains; the estimation of integrals needs to be decoupled from the sampling strategy, such as when sparse, adaptive sampling is used; marginal functions need to be known in functional form; or when powerful Single Instruction Multiple Data/Thread (SIMD/SIMT) pipelines are available.
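
The key property is easiest to see in one dimension: the antiderivative of a sigmoid is the softplus, so the integral of a trained sigmoidal proxy over an interval is available in closed form from its weights. The sketch below is a simplified 1D illustration of that principle, not the Q-NET construction itself (which operates on multi-dimensional networks); it checks the closed form against brute-force quadrature.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)  # numerically stable log(1 + exp(x))

def integral_of_sigmoid_proxy(v, w, b, c, lo=0.0, hi=1.0):
    """Exact integral over [lo, hi] of f(x) = sum_i v[i]*sigmoid(w[i]*x + b[i]) + c,
    using d/dx softplus(w*x + b) = w * sigmoid(w*x + b)."""
    return np.sum(v * (softplus(w * hi + b) - softplus(w * lo + b)) / w) + c * (hi - lo)

# Compare against dense quadrature on a random 1D proxy.
rng = np.random.default_rng(0)
v, b, c = rng.normal(size=8), rng.normal(size=8), 0.2
w = rng.uniform(1.0, 5.0, size=8)  # keep |w| away from 0 in this toy formula
x = np.linspace(0.0, 1.0, 200001)
f = (v / (1.0 + np.exp(-(w * x[:, None] + b)))).sum(axis=1) + c
print(integral_of_sigmoid_proxy(v, w, b, c), np.mean(f))  # should match closely
```
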
Item: Point-Based Neural Rendering with Per-View Optimization
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Kopanas, Georgios; Philip, Julien; Leimkühler, Thomas; Drettakis, George; Bousseau, Adrien and McGuire, Morgan
There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel-view synthesis. A key element of our approach is our new differentiable point-based pipeline, based on bi-directional Elliptical Weighted Average splatting, a probabilistic depth test and effective camera selection. We use these elements together in our neural renderer, which outperforms all previous methods in both quality and speed in almost all scenes we tested. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel-view synthesis.

Item: Optimised Path Space Regularisation
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Weier, Philippe; Droske, Marc; Hanika, Johannes; Weidlich, Andrea; Vorba, Jirí; Bousseau, Adrien and McGuire, Morgan
We present Optimised Path Space Regularisation (OPSR), a novel regularisation technique for forward path tracing algorithms. Our regularisation controls the amount of roughness added to materials depending on the type of sampled paths and trades a small error in the estimator for a drastic reduction of variance in difficult paths, including indirectly visible caustics. We formulate this as a joint bias-variance minimisation problem and use differentiable rendering to optimise our model. The learnt parameters generalise to a large variety of scenes irrespective of their geometric complexity. The regularisation added to the underlying light transport algorithm naturally allows us to handle the problem of near-specular and glossy path chains robustly. Our method consistently improves the convergence of path tracing estimators, including state-of-the-art path guiding techniques, where it enables finding otherwise hard-to-sample paths and thus, in turn, can significantly speed up the learning of guiding distributions.
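
Path space regularisation itself, the mechanism being optimised here, amounts to selectively increasing BSDF roughness at a vertex depending on the path prefix that reached it. A toy sketch with a hand-picked threshold tau; OPSR instead learns such parameters per path type via differentiable rendering against a joint bias-variance objective:

```python
def regularized_roughness(base_roughness, path_prefix, tau=0.05):
    """Toy path-space regularisation: once a path has bounced off a diffuse
    ('D') or glossy ('G') vertex, clamp the roughness of subsequent
    near-specular BSDFs to at least tau, trading a little bias for a large
    variance reduction on paths such as indirectly visible caustics."""
    if any(v in ('D', 'G') for v in path_prefix):
        return max(base_roughness, tau)
    return base_roughness  # directly visible specular surfaces stay unbiased

print(regularized_roughness(0.0, ['D']))  # 0.05: mollified caustic chain
print(regularized_roughness(0.0, []))     # 0.0: seen straight from the camera
```
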
Item: Deep Compositional Denoising for High-quality Monte Carlo Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhang, Xianyao; Manzi, Marco; Vogels, Thijs; Dahlberg, Henrik; Gross, Markus; Papas, Marios; Bousseau, Adrien and McGuire, Morgan
We propose a deep-learning method for automatically decomposing noisy Monte Carlo renderings into components that kernel-predicting denoisers can denoise more effectively. In our model, a neural decomposition module learns to predict noisy components and corresponding feature maps, which are subsequently reconstructed by a denoising module. The components are predicted based on statistics aggregated at the pixel level by the renderer. Denoising these components individually allows the use of per-component kernels that adapt to each component's noisy signal characteristics. Experimentally, we show that the proposed decomposition module consistently improves the denoising quality of current state-of-the-art kernel-predicting denoisers on large-scale academic and production datasets.

Item: Real-time Monte Carlo Denoising with Weight Sharing Kernel Prediction Network
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Fan, Hangming; Wang, Rui; Huo, Yuchi; Bao, Hujun; Bousseau, Adrien and McGuire, Morgan
Real-time Monte Carlo denoising aims at removing severe noise under low samples per pixel (spp) in a strict time budget. Recently, kernel-prediction methods use a neural network to predict each pixel's filtering kernel and have shown great potential to remove Monte Carlo noise. However, the heavy computation overhead blocks these methods from real-time applications. This paper expands the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel map encoding yields a compact single-channel representation of the kernel map, which can significantly reduce the kernel-prediction network's throughput. In addition, we adopt a scalable kernel fusion module to improve denoising quality. The proposed approach preserves the denoising quality of kernel-prediction methods while roughly halving their denoising time for 1-spp noisy inputs. In addition, compared with the recent neural bilateral grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results at equal time.
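
The reconstruction step that kernel-prediction denoisers share, applying an independently predicted k x k filter at every pixel, is compact to write down. The sketch below shows only that generic step; the paper's contributions, the compact kernel-map encoding and the unfolding decoder, are not modeled here.

```python
import numpy as np

def apply_per_pixel_kernels(noisy, kernels):
    """Filter each pixel of a noisy render with its own predicted kernel.
    noisy:   (H, W, 3) radiance image
    kernels: (H, W, k*k) per-pixel weights, assumed to sum to 1 per pixel
    """
    H, W, _ = noisy.shape
    k = int(np.sqrt(kernels.shape[-1]))
    r = k // 2
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.zeros_like(noisy)
    for i in range(k * k):
        dy, dx = divmod(i, k)  # tap offset within the k x k footprint
        out += kernels[..., i:i + 1] * padded[dy:dy + H, dx:dx + W]
    return out

# With uniform weights this degenerates to an ordinary 3x3 box filter.
img = np.random.default_rng(1).random((32, 32, 3))
uniform = np.full((32, 32, 9), 1.0 / 9.0)
print(apply_per_pixel_kernels(img, uniform).shape)  # (32, 32, 3)
```
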
Item: Rendering 2021 CGF 40-4: Frontmatter
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Bousseau, Adrien; McGuire, Morgan; Bousseau, Adrien and McGuire, Morgan

Item: Moving Basis Decomposition for Precomputed Light Transport
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Silvennoinen, Ari; Sloan, Peter-Pike; Bousseau, Adrien and McGuire, Morgan
We study the problem of efficiently representing potentially high-dimensional, spatially coherent signals in the context of precomputed light transport. We present a basis decomposition framework, Moving Basis Decomposition (MBD), that generalizes many existing basis expansion methods and enables high-performance, seamless reconstruction of compressed data. We develop an algorithm for solving large-scale MBD problems. We evaluate MBD against the state of the art in a series of controlled experiments and describe a real-world application, where MBD serves as the backbone of a scalable global illumination system powering multiple current and upcoming 60 Hz AAA titles running on a wide range of hardware platforms.

Item: Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Luan, Fujun; Zhao, Shuang; Bala, Kavita; Dong, Zhao; Bousseau, Adrien and McGuire, Morgan
Reconstructing the shape and appearance of real-world objects from measured 2D images is a long-standing inverse rendering problem. In this paper, we introduce a new analysis-by-synthesis technique capable of producing high-quality reconstructions through robust coarse-to-fine optimization and physics-based differentiable rendering. Unlike most previous methods that handle geometry and reflectance largely separately, our method unifies the optimization of both by leveraging image gradients with respect to both object reflectance and geometry. To obtain physically accurate gradient estimates, we develop a new GPU-based Monte Carlo differentiable renderer leveraging recent advances in differentiable rendering theory to offer unbiased gradients while enjoying better performance than existing tools like PyTorch3D [RRN*20] and redner [LADL18]. To further improve robustness, we utilize several shape and material priors as well as a coarse-to-fine optimization strategy to reconstruct geometry. Using both synthetic and real input images, we demonstrate that our technique can produce reconstructions with higher quality than previous methods.

Item: Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Thonat, Theo; Aksoy, Yagiz; Aittala, Miika; Paris, Sylvain; Durand, Fredo; Drettakis, George; Bousseau, Adrien and McGuire, Morgan
Image-Based Rendering allows users to easily capture a scene using a single camera and then navigate freely with realistic results. However, the resulting renderings are completely static, and dynamic effects - such as fire, waterfalls or small waves - cannot be reproduced. We tackle the challenging problem of enabling free-viewpoint navigation including such stationary dynamic effects, while still maintaining the simplicity of casual capture. Using a single camera - instead of the complex synchronized multi-camera setups of previous work - means that we have unsynchronized videos of the dynamic effect from multiple views, making it hard to blend them when synthesizing novel views. We present a solution that allows smooth free-viewpoint video-based rendering (VBR) of such scenes using temporal Laplacian pyramid decomposition of the videos, enabling spatio-temporal blending. For effects such as fire and waterfalls, which are semi-transparent and occupy 3D space, we first estimate their spatial volume. This allows us to create per-video geometries and alpha-matte videos that we can blend using our frequency-dependent method. We also extend Laplacian blending to the temporal dimension to remove additional temporal seams. We show results on scenes containing fire, waterfalls or rippling waves at the seaside, bringing these scenes to life.
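
Laplacian-pyramid blending, which the paper extends to the temporal dimension, mixes two signals band by band so that seams are hidden at every frequency. A minimal 1D sketch following the classical Burt-Adelson construction, where axis 0 can be read as time; the paper's full method also handles per-video geometry, alpha mattes and view-dependent weights, none of which appears here:

```python
import numpy as np

def blur(x):
    # [1 2 1]/4 low-pass along axis 0 (wrap-around boundaries for brevity)
    return 0.25 * (np.roll(x, 1, axis=0) + 2.0 * x + np.roll(x, -1, axis=0))

def upsample(x, n):
    up = np.zeros((n,) + x.shape[1:])
    up[::2] = x
    return 2.0 * blur(up)  # zero-stuffing + blur = linear interpolation

def laplacian_pyramid(x, levels):
    pyr = []
    for _ in range(levels - 1):
        low = blur(x)[::2]                         # next coarser level
        pyr.append(x - upsample(low, x.shape[0]))  # detail lost by downsampling
        x = low
    pyr.append(x)                                  # coarsest low-pass residual
    return pyr

def laplacian_blend(a, b, alpha, levels=4):
    """Blend a and b with weight alpha, one frequency band at a time."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    out = alpha * pa[-1] + (1.0 - alpha) * pb[-1]
    for da, db in zip(reversed(pa[:-1]), reversed(pb[:-1])):
        out = upsample(out, da.shape[0]) + alpha * da + (1.0 - alpha) * db
    return out
```
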
Item: DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Neff, Thomas; Stadlbauer, Pascal; Parger, Mathias; Kurz, Andreas; Mueller, Joerg H.; Chaitanya, Chakravarty R. Alla; Kaplanyan, Anton S.; Steinberger, Markus; Bousseau, Adrien and McGuire, Morgan
The recent research explosion around implicit neural representations, such as NeRF, shows that there is immense potential for implicitly storing high-quality scene and lighting information in compact neural networks. However, one major limitation preventing the use of NeRF in real-time rendering applications is the prohibitive computational cost of excessive network evaluations along each view ray, requiring dozens of petaFLOPS. In this work, we bring compact neural representations closer to practical rendering of synthetic content in real-time applications, such as games and virtual reality. We show that the number of samples required for each view ray can be significantly reduced, without compromising image quality, when samples are placed around surfaces in the scene. To this end, we propose a depth oracle network that predicts ray sample locations for each view ray with a single network evaluation. We show that using a classification network over logarithmically discretized and spherically warped depth values is essential to encode surface locations rather than directly estimating depth. The combination of these techniques leads to DONeRF, our compact dual network design with a depth oracle network as its first step and a locally sampled shading network for ray accumulation. With DONeRF, we reduce the inference costs by up to 48x compared to NeRF when conditioning on available ground truth depth information. Compared to concurrent acceleration methods for raymarching-based neural representations, DONeRF does not require additional memory for explicit caching or acceleration structures, and can render interactively (20 frames per second) on a single GPU.

Item: Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Schütz, Markus; Kerbl, Bernhard; Wimmer, Michael; Bousseau, Adrien and McGuire, Morgan
In this paper, we present several compute-based point cloud rendering approaches that outperform the hardware pipeline by up to an order of magnitude and achieve significantly better frame times than previous compute-based methods. Beyond basic closest-point rendering, we also introduce a fast, high-quality variant to reduce aliasing. We present and evaluate several variants of our proposed methods with different flavors of optimization, in order to ensure their applicability and achieve optimal performance on a range of platforms and architectures with varying support for novel GPU hardware features. During our experiments, the observed peak performance was reached rendering 796 million points (12.7 GB) at rates of 62 to 64 frames per second (50 billion points per second, 802 GB/s) on an RTX 3090 without the use of level-of-detail structures. We further introduce an optimized vertex order for point clouds to boost the efficiency of GL_POINTS by a factor of 5x in cases where hardware rendering is compulsory. We compare different orderings and show that Morton-sorted buffers are faster for some viewpoints, while shuffled vertex buffers are faster for others. In contrast, combining both approaches by first sorting according to Morton code and then shuffling the resulting sequence in batches of 128 points leads to a vertex buffer layout with high rendering performance and low sensitivity to viewpoint changes; a toy version of this ordering is sketched at the end of this listing.

Item: PosterChild: Blend-Aware Artistic Posterization
(The Eurographics Association and John Wiley & Sons Ltd., 2021) Chao, Cheng-Kang; Singh, Karan; Gingold, Yotam; Bousseau, Adrien and McGuire, Morgan
Posterization is an artistic effect which converts continuous images into regions of constant color with smooth boundaries, often with an artistically recolored palette. Artistic posterization is extremely time-consuming and tedious. We introduce a blend-aware algorithm for generating posterized images with palette-based control for artistic recoloring. Our algorithm automatically extracts a palette and then uses multi-label optimization to find blended-color regions in terms of that palette. We smooth boundaries away from image details with frequency-guided median filtering. We evaluate our algorithm with a comparative user study and showcase its ability to produce compelling posterizations of a variety of inputs. Our parameters provide artistic control and enable cohesive, real-time recoloring after posterization pre-processing.
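
The baseline that PosterChild's blend-aware labeling improves upon is plain nearest-palette quantization, after which recoloring reduces to swapping palette rows. A minimal sketch of that baseline, assuming a palette is already given; the paper itself extracts the palette automatically, labels pixels with palette blends via multi-label optimization, and smooths boundaries with frequency-guided median filtering, none of which appears here:

```python
import numpy as np

def posterize_nearest(image, palette):
    """Snap each pixel to its nearest palette color.
    image:   (H, W, 3) floats in [0, 1]
    palette: (P, 3) palette colors
    Returns the posterized image and the per-pixel palette labels."""
    d = np.linalg.norm(image[:, :, None, :] - palette[None, None], axis=-1)
    labels = np.argmin(d, axis=-1)  # (H, W) indices into the palette
    return palette[labels], labels

# Recoloring after posterization is then just re-indexing the label map:
# recolored = new_palette[labels]
```
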
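
The vertex ordering described in "Rendering Point Clouds with Compute Shaders and Vertex Order Optimization" above is also easy to prototype offline. A NumPy sketch with a hypothetical order_vertices helper; the paper's performance figures come from its GPU implementation, not from code like this:

```python
import numpy as np

def morton3d(q):
    """Interleave the bits of 10-bit x, y, z coordinates into 30-bit Morton codes.
    q: (N, 3) integer array with each coordinate in [0, 1023]."""
    def spread(v):  # classic "Part1By2" bit spreading
        v = v & np.uint32(0x3FF)
        v = (v | (v << 16)) & np.uint32(0xFF0000FF)
        v = (v | (v << 8)) & np.uint32(0x0300F00F)
        v = (v | (v << 4)) & np.uint32(0x030C30C3)
        v = (v | (v << 2)) & np.uint32(0x09249249)
        return v
    q = q.astype(np.uint32)
    return spread(q[:, 0]) | (spread(q[:, 1]) << 1) | (spread(q[:, 2]) << 2)

def order_vertices(points, batch=128, seed=0):
    """Sort points by Morton code for spatial locality, then shuffle whole
    batches of `batch` consecutive points: each batch stays spatially local,
    while the shuffle decorrelates buffer order from screen position."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    q = ((points - lo) / (hi - lo) * 1023).astype(np.uint32)
    order = np.argsort(morton3d(q), kind='stable')
    batches = np.array_split(order, range(batch, len(order), batch))
    perm = np.random.default_rng(seed).permutation(len(batches))
    return np.concatenate([batches[i] for i in perm])

# points = load_point_cloud(...)       # hypothetical loader
# vbo_order = order_vertices(points)   # upload points[vbo_order] as GL_POINTS
```
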