Browsing by Author "Ramamoorthi, Ravi"
Now showing 1 - 9 of 9
Item: Analysis of Sample Correlations for Monte Carlo Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Singh, Gurprit; Öztireli, Cengiz; Ahmed, Abdalla G. M.; Coeurjolly, David; Subr, Kartic; Deussen, Oliver; Ostromoukhov, Victor; Ramamoorthi, Ravi; Jarosz, Wojciech
Editors: Giachetti, Andrea and Rushmeier, Holly
Abstract: Modern physically based rendering techniques critically depend on approximating integrals of high-dimensional functions representing radiant light energy. Monte Carlo-based integrators are the method of choice for complex scenes and effects. These integrators work by sampling the integrand at sample point locations. The distribution of these sample points determines convergence rates and noise in the final renderings. The characteristics of such distributions can be uniquely represented in terms of correlations of sample point locations. Hence, it is essential to study these correlations to understand and adapt sample distributions for low error in integral approximation. In this work, we aim to provide a comprehensive and accessible overview of the techniques developed over the last decades to analyze such correlations, relate them to error in integrators, and understand when and how to use existing sampling algorithms for effective rendering workflows.

Item: Deep HDR Video from Sequences with Alternating Exposures (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Kalantari, Nima Khademi; Ramamoorthi, Ravi
Editors: Alliez, Pierre and Pellacini, Fabio
Abstract: A practical way to generate a high dynamic range (HDR) video using off-the-shelf cameras is to capture a sequence with alternating exposures and reconstruct the missing content at each frame. Unfortunately, existing approaches are typically slow and are not able to handle challenging cases. In this paper, we propose a learning-based approach to address this difficult problem. To do this, we use two sequential convolutional neural networks (CNNs) to model the entire HDR video reconstruction process. In the first step, we align the neighboring frames to the current frame by estimating the flows between them using a network specifically designed for this application. We then combine the aligned and current images using another CNN to produce the final HDR frame. We perform end-to-end training by minimizing the error between the reconstructed and ground truth HDR images on a set of training scenes. We produce our training data synthetically from existing HDR video datasets and simulate the imperfections of standard digital cameras using a simple approach. Experimental results demonstrate that our approach produces high-quality HDR videos and is an order of magnitude faster than state-of-the-art techniques for sequences with two and three alternating exposures.

Item: Deep Kernel Density Estimation for Photon Mapping (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Zhu, Shilin; Xu, Zexiang; Jensen, Henrik Wann; Su, Hao; Ramamoorthi, Ravi
Editors: Dachsbacher, Carsten and Pharr, Matt
Abstract: Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, and specifically focus on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function to aggregate photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons than previous photon mapping methods. Our approach greatly reduces the required number of photons, significantly improving the computational efficiency of photon mapping.
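For the sample-correlations survey listed above, a minimal, hedged illustration of its central point: the placement (correlation) of sample points governs Monte Carlo integration error. The sketch below compares uncorrelated random sampling with jittered (stratified) sampling on a simple 1D integrand; the integrand, sample counts, and trial counts are arbitrary choices for illustration, not taken from the paper.

```python
# Minimal sketch: how sample-point correlations affect Monte Carlo error.
# The integrand and sample counts are illustrative choices, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def integrand(x):
    # Smooth 1D test function with known integral over [0, 1]: 2/pi
    return np.sin(np.pi * x)

def mc_random(n):
    # Uncorrelated (white noise) sample locations
    x = rng.random(n)
    return integrand(x).mean()

def mc_jittered(n):
    # Stratified ("jittered") samples: one sample per stratum, a simple
    # example of negatively correlated sample locations
    strata = (np.arange(n) + rng.random(n)) / n
    return integrand(strata).mean()

exact = 2.0 / np.pi
for n in (16, 64, 256):
    err_rand = np.std([mc_random(n) - exact for _ in range(500)])
    err_jit = np.std([mc_jittered(n) - exact for _ in range(500)])
    print(f"n={n:4d}  random RMSE ~ {err_rand:.5f}   jittered RMSE ~ {err_jit:.5f}")
```

The jittered estimator's error shrinks noticeably faster with sample count, which is the kind of behaviour the survey's correlation analysis formalizes.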
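The HDR-video entry above describes a two-stage pipeline: a flow network aligns neighbouring frames to the current frame, then a merge CNN fuses them into one HDR frame. The PyTorch sketch below only mirrors that structure; the module names (AlignNet, MergeNet), layer sizes, and the choice to predict flow directly in normalized grid units are invented for illustration and are not the authors' architecture.

```python
# Structural sketch of a two-stage "align then merge" HDR pipeline.
# Module names and layer sizes are placeholders, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignNet(nn.Module):
    """Predicts a dense 2D flow that warps a neighbouring frame to the current one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2-channel flow, in normalized grid units
        )

    def forward(self, current, neighbour):
        flow = self.net(torch.cat([current, neighbour], dim=1))
        # Build a sampling grid and warp the neighbour toward the current frame.
        n, _, h, w = current.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=current.device),
            torch.linspace(-1, 1, w, device=current.device), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)
        return F.grid_sample(neighbour, grid, align_corners=True)

class MergeNet(nn.Module):
    """Fuses the aligned neighbours with the current frame into one HDR frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, current, aligned_prev, aligned_next):
        return self.net(torch.cat([current, aligned_prev, aligned_next], dim=1))

# Toy usage with random frames:
cur, prev, nxt = (torch.rand(1, 3, 64, 64) for _ in range(3))
align, merge = AlignNet(), MergeNet()
hdr = merge(cur, align(cur, prev), align(cur, nxt))
print(hdr.shape)  # torch.Size([1, 3, 64, 64])
```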
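The photon-mapping entry above centres on kernel density estimation at shading points. The sketch below shows the classical fixed-kernel estimate that a learned kernel would replace, plus a placeholder hook (predict_kernel_weights) standing in for the paper's network; the hook, its signature, and the omission of the BRDF are assumptions made purely for illustration.

```python
# Sketch of photon density estimation at a shading point. The k-nearest-photon
# estimate is the classical baseline; `predict_kernel_weights` is a placeholder
# for a learned kernel and is NOT the network from the paper.
import numpy as np

def estimate_radiance(shading_point, photon_positions, photon_flux, k=50,
                      predict_kernel_weights=None):
    # Gather the k nearest photons around the shading point.
    d2 = np.sum((photon_positions - shading_point) ** 2, axis=1)
    idx = np.argpartition(d2, k)[:k]
    r2 = d2[idx].max()                      # squared radius of the gather disc

    if predict_kernel_weights is None:
        # Classical constant kernel: every gathered photon weighted equally.
        weights = np.full(k, 1.0 / (np.pi * r2))
    else:
        # Learned kernel: a network maps per-photon features (here, offsets
        # and flux) to per-photon weights. Signature is an assumption.
        offsets = photon_positions[idx] - shading_point
        weights = predict_kernel_weights(offsets, photon_flux[idx])

    # Density estimate: weighted sum of photon flux (BRDF omitted for brevity).
    return np.sum(weights[:, None] * photon_flux[idx], axis=0)

# Example with random photons and the classical kernel:
rng = np.random.default_rng(1)
photons = rng.uniform(-1, 1, size=(10_000, 3))
flux = rng.uniform(0, 1, size=(10_000, 3))
print(estimate_radiance(np.zeros(3), photons, flux))
```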
Item: Human Hair Inverse Rendering using Multi-View Photometric Data (The Eurographics Association, 2021)
Authors: Sun, Tiancheng; Nam, Giljoo; Aliaga, Carlos; Hery, Christophe; Ramamoorthi, Ravi
Editors: Bousseau, Adrien and McGuire, Morgan
Abstract: We introduce a hair inverse rendering framework to reconstruct high-fidelity 3D geometry of human hair, as well as its reflectance, which can be readily used for photorealistic rendering of hair. We take multi-view photometric data as input, i.e., a set of images taken from various viewpoints under different lighting conditions. Our method consists of two stages. First, we propose a novel solution for line-based multi-view stereo that yields accurate hair geometry from multi-view photometric data. Specifically, a per-pixel lightcode is proposed to efficiently solve the hair correspondence matching problem. Our new solution enables accurate and dense strand reconstruction from a smaller number of cameras than state-of-the-art work. In the second stage, we estimate hair reflectance properties from the multi-view photometric data. A simplified BSDF model of hair strands is used for realistic appearance reproduction. Based on the 3D geometry of the hair strands, we fit the longitudinal roughness and find the single-strand color. We show that our method can faithfully reproduce the appearance of human hair and provide realism for digital humans. We demonstrate the accuracy and efficiency of our method using photorealistic synthetic hair rendering data.

Item: MesoGAN: Generative Neural Reflectance Shells (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Authors: Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George
Editors: Hauser, Helwig and Alliez, Pierre
Abstract: We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell: a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end-to-end training within the memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi-scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g., fibre length, density of strands, lighting direction) and demonstrate and discuss integration into physically based renderers.
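The hair-reconstruction entry above mentions a per-pixel "lightcode" used to solve correspondence matching across views. One plausible reading, sketched below purely as an assumption and not as the paper's algorithm: stack each pixel's intensities across the differently lit captures into a vector, normalize it, and match pixels between views by comparing those vectors.

```python
# Hedged sketch of lightcode-style correspondence matching: the vector of a
# pixel's intensities under L lighting conditions acts as a code, and codes are
# matched between two views by normalized correlation. This is an illustrative
# interpretation, not the paper's exact method.
import numpy as np

def lightcodes(images):
    # images: (L, H, W) grayscale captures of one view under L lightings.
    codes = images.reshape(images.shape[0], -1).T          # (H*W, L)
    codes = codes - codes.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(codes, axis=1, keepdims=True)
    return codes / np.maximum(norms, 1e-8)

def match_pixel(code, candidate_codes):
    # Best match = candidate pixel whose code has the highest correlation.
    scores = candidate_codes @ code
    return int(np.argmax(scores)), float(scores.max())

# Toy usage with random data standing in for two calibrated views:
rng = np.random.default_rng(2)
view_a = rng.random((12, 64, 64))
view_b = view_a + 0.01 * rng.standard_normal((12, 64, 64))  # near-identical view
codes_a, codes_b = lightcodes(view_a), lightcodes(view_b)
idx, score = match_pixel(codes_a[1000], codes_b)
print(idx, round(score, 3))   # expects index 1000 with score near 1.0
```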
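For the MesoGAN entry above, the sketch below illustrates only the general idea of evaluating a "reflectance shell": a point in the thin layer above a surface is mapped to (u, v, height), a feature is read from a 2D feature texture, and a small decoder turns feature plus height into volumetric shading parameters. The texture size, feature width, decoder, and output parameterization are all invented placeholders, not MesoGAN's actual networks.

```python
# Hedged sketch of querying a neural reflectance shell at one point.
# All sizes and the decoder are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
FEATURES = rng.standard_normal((256, 256, 16))    # stand-in 2D feature texture

def sample_feature(u, v):
    # Nearest-neighbour lookup for brevity (a real shell would filter).
    x = int(u * (FEATURES.shape[1] - 1))
    y = int(v * (FEATURES.shape[0] - 1))
    return FEATURES[y, x]

def decode(feature, height, w1, b1, w2, b2):
    # Tiny placeholder MLP: feature + height -> (density, rgb albedo).
    x = np.concatenate([feature, [height]])
    h = np.maximum(w1 @ x + b1, 0.0)
    out = w2 @ h + b2
    return max(out[0], 0.0), np.clip(out[1:4], 0.0, 1.0)

# Random decoder weights stand in for trained parameters.
w1, b1 = rng.standard_normal((32, 17)), np.zeros(32)
w2, b2 = rng.standard_normal((4, 32)), np.zeros(4)
density, albedo = decode(sample_feature(0.3, 0.7), 0.5, w1, b1, w2, b2)
print(density, albedo)
```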
Item: NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting (The Eurographics Association, 2021)
Authors: Sun, Tiancheng; Lin, Kai-En; Bi, Sai; Xu, Zexiang; Ramamoorthi, Ravi
Editors: Bousseau, Adrien and McGuire, Morgan
Abstract: Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine what a face will look like in another setup, but computer algorithms still fail on this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under new environmental lighting. Our system is trained on a large number of synthetic models and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as input, and achieves state-of-the-art results.

Item: Neural Free-Viewpoint Relighting for Glossy Indirect Illumination (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Raghavan, Nithin; Xiao, Yan; Lin, Kai-En; Sun, Tiancheng; Bi, Sai; Xu, Zexiang; Li, Tzu-Mao; Ramamoorthi, Ravi
Editors: Ritschel, Tobias; Weidlich, Andrea
Abstract: Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view, or to direct lighting only via triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution for high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters as additional MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512×512 at 24 FPS, 800×600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
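The free-viewpoint relighting entry above represents light transport in the Haar wavelet basis, so relighting at a shading point reduces to a dot product between wavelet coefficients of the environment map and transport coefficients predicted by an MLP. The sketch below shows only that final step, with a one-level 2D Haar transform and a stand-in transport predictor; the predictor, the truncation count, and the single-level transform are assumptions for illustration.

```python
# Sketch of wavelet-space relighting: project the environment map onto a Haar
# basis, keep the largest coefficients, and dot them with per-point transport
# coefficients. `predict_transport` is a stand-in for the paper's MLP.
import numpy as np

def haar2d(img):
    # One level of the 2D Haar transform (average + 3 detail bands).
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return np.concatenate([a.ravel(), h.ravel(), v.ravel(), d.ravel()])

def relight(env_map, predict_transport, keep=128):
    coeffs = haar2d(env_map)
    top = np.argsort(np.abs(coeffs))[-keep:]   # all-frequency truncation
    transport = predict_transport(top)         # (keep,) transport per wavelet index
    return float(coeffs[top] @ transport)      # outgoing radiance at one point

# Toy usage: a random environment map and a dummy "MLP".
rng = np.random.default_rng(3)
env = rng.random((64, 64))
dummy_mlp = lambda idx: rng.random(idx.shape[0]) * 1e-2
print(relight(env, dummy_mlp))
```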
Item: PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Lin, Kai-En; Trevithick, Alex; Cheng, Keli; Sarkis, Michel; Ghafoorian, Mohsen; Bi, Ning; Reitmayr, Gerhard; Ramamoorthi, Ravi
Editors: Ritschel, Tobias; Weidlich, Andrea
Abstract: Portrait synthesis creates realistic digital avatars that enable users to interact with others in a compelling way. Recent advances in StyleGAN and its extensions have shown promising results in photorealistic and accurate reconstruction of human faces. However, previous methods often focus on frontal face synthesis, and most cannot handle large head rotations due to the training data distribution of StyleGAN. In this work, our goal is to take a monocular video of a face as input and create an editable dynamic portrait able to handle extreme head poses. The user can create novel viewpoints, edit the appearance, and animate the face. Our method utilizes pivotal tuning inversion (PTI) to learn a personalized video prior from a monocular video sequence. We then feed pose and expression coefficients to MLPs and manipulate the latent vectors to synthesize different viewpoints and expressions of the subject. We also propose novel loss functions to further disentangle pose and expression in the latent space. Our algorithm shows much better performance than previous approaches on monocular video datasets, and it is capable of running in real time at 54 FPS on an RTX 3080.

Item: Spatiotemporal Blue Noise Masks (The Eurographics Association, 2022)
Authors: Wolfe, Alan; Morrical, Nathan; Akenine-Möller, Tomas; Ramamoorthi, Ravi
Editors: Ghosh, Abhijeet; Wei, Li-Yi
Abstract: Blue noise error patterns are well suited to human perception, and when applied to stochastic rendering techniques, blue noise masks can minimize unwanted low-frequency noise in the final image. Current methods of applying different blue noise masks to each rendered frame result in either white-noise frequency spectra temporally, and thus poor convergence and stability, or lower quality spatially. We propose novel blue noise masks that retain high-quality blue noise spatially, yet when animated produce values at each pixel that are well distributed over time. To do so, we create scalar-valued masks by modifying the energy function of the void-and-cluster algorithm. To create uniform and non-uniform vector-valued masks, we make the same modifications to the blue-noise dithered sampling algorithm. These masks exhibit blue noise frequency spectra in both the spatial and temporal domains, resulting in visually pleasing error patterns, rapid convergence speeds, and increased stability when filtered temporally. Since masks can be initialized with arbitrary sample sets, these improvements can be used on a large variety of problems, both uniformly and importance sampled. We demonstrate these improvements in volumetric rendering, ambient occlusion, and stochastic convolution. By extending spatial blue noise to spatiotemporal blue noise, we overcome the convergence limitations of prior blue noise works, enabling new applications for blue noise distributions. Usable masks and source code can be found at https://github.com/NVIDIAGameWorks/SpatiotemporalBlueNoiseSDK.
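The blue-noise entry above modifies the energy function of the void-and-cluster algorithm so that masks are blue spatially within each frame and well distributed temporally per pixel. The sketch below writes down one energy of that flavour, a spatial Gaussian restricted to same-frame pixels plus a temporal Gaussian restricted to the same pixel; the sigmas and the exact form are assumptions about the general idea, not the paper's formulation.

```python
# Hedged sketch of a spatiotemporal void-and-cluster style energy: pairs of
# mask entries interact spatially only within the same frame and temporally
# only at the same pixel. Sigmas and the exact form are illustrative
# assumptions, not the paper's formulation.
import numpy as np

def energy(p, q, sigma_s=1.9, sigma_t=1.0):
    # p, q are (x, y, t) integer coordinates in the spatiotemporal mask.
    (px, py, pt), (qx, qy, qt) = p, q
    e = 0.0
    if pt == qt:                       # same frame: spatial blue noise term
        ds2 = (px - qx) ** 2 + (py - qy) ** 2
        e += np.exp(-ds2 / (2 * sigma_s ** 2))
    if (px, py) == (qx, qy):           # same pixel: temporal term
        dt2 = (pt - qt) ** 2
        e += np.exp(-dt2 / (2 * sigma_t ** 2))
    return e

def total_energy(point, chosen):
    # Energy a candidate point accumulates against already-chosen points;
    # a void-and-cluster style loop would insert the lowest-energy candidate.
    return sum(energy(point, c) for c in chosen)

chosen = [(3, 3, 0), (10, 3, 0), (3, 3, 1)]
print(total_energy((4, 3, 0), chosen))   # high: spatially close, same frame
print(total_energy((4, 3, 2), chosen))   # low: different frame and pixel
```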