Rendering - Experimental Ideas & Implementations
Browsing Rendering - Experimental Ideas & Implementations by Subject "based rendering"
Now showing 1 - 6 of 6
Item: Fast Polygonal Splatting using Directional Kernel Difference
(The Eurographics Association, 2021) Moroto, Yuji; Hachisuka, Toshiya; Umetani, Nobuyuki; Bousseau, Adrien and McGuire, Morgan
Depth-of-field (DoF) filtering is an important image-processing task for producing blurred images similar to those obtained with a large-aperture camera lens. DoF filtering applies an image convolution with a spatially varying kernel and is thus computationally expensive, even on modern hardware. In this paper, we introduce an approach for fast and accurate DoF filtering with polygonal kernels, where the value is constant inside the kernel. Our approach extends the existing approach based on discrete differenced kernels. The performance gain hinges on the fact that kernels typically become sparse (i.e., mostly zero) when differenced. We extend the existing approach from conventional axis-aligned differences to non-axis-aligned differences. The key insight is that taking such differences along the directions of the kernel's edges makes polygonal kernels significantly sparser than taking differences along the axis-aligned directions, as in existing studies. Compared to a naive image convolution, we achieve an order-of-magnitude speedup, allowing real-time application of polygonal kernels even on high-resolution images.

Item: High Quality Neural Relighting using Practical Zonal Illumination
(The Eurographics Association, 2024) Lin, Arvin; Lin, Yiming; Li, Xiaohui; Ghosh, Abhijeet; Haines, Eric; Garces, Elena
We present a method for high-quality image-based relighting using a practical, limited zonal illumination field. Our setup can be implemented with commodity components and no dedicated hardware. We employ a set of desktop monitors to illuminate a subject from a near-hemispherical zone and record One-Light-At-A-Time (OLAT) images from multiple viewpoints. We further extrapolate the sampling of incident illumination directions beyond the frontal coverage of the monitors by repeating OLAT captures with the subject rotated relative to the capture setup. Finally, we train our proposed skip-assisted autoencoder and latent-diffusion-based generative method to learn a high-quality continuous representation of the reflectance function without requiring explicit alignment of the data captured from the various viewpoints. This method enables smooth lighting animation for high-frequency reflectance functions and effectively extends incident lighting beyond the illumination zone of the practical capture setup. Compared to state-of-the-art methods, our approach achieves superior image-based relighting results, capturing finer skin-pore details and extending to passive performance-video relighting.

Item: Learning Self-Shadowing for Clothed Human Bodies
(The Eurographics Association, 2024) Einabadi, Farshad; Guillemaut, Jean-Yves; Hilton, Adrian; Haines, Eric; Garces, Elena
This paper proposes to learn self-shadowing on full-body, clothed human postures from monocular colour-image input by supervising a deep neural model. The proposed approach implicitly learns the articulated body shape in order to generate self-shadow maps, without seeking to explicitly reconstruct or estimate parametric 3D body geometry. Furthermore, it generalises to different people without per-subject pre-training and has fast inference timings. The proposed neural model is trained on self-shadow maps rendered from 3D scans of real people under various light directions. Inference of shadow maps for a given illumination is performed from 2D image input alone. Quantitative and qualitative experiments demonstrate results comparable to the state of the art whilst being monocular and achieving a considerably faster inference time. We provide ablations of our methodology and further show how the inferred self-shadow maps can benefit monocular full-body human relighting.

Item: NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting
(The Eurographics Association, 2021) Sun, Tiancheng; Lin, Kai-En; Bi, Sai; Xu, Zexiang; Ramamoorthi, Ravi; Bousseau, Adrien and McGuire, Morgan
Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine how a face will look in another setup, but computer algorithms still fail at this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) we produce a portrait from a new camera view under new environmental lighting. Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as input, and achieves state-of-the-art results.

Item: Sampling Clear Sky Models using Truncated Gaussian Mixtures
(The Eurographics Association, 2021) Vitsas, Nick; Vardis, Konstantinos; Papaioannou, Georgios; Bousseau, Adrien and McGuire, Morgan
Parametric clear-sky models are often represented by simple analytic expressions that can efficiently generate plausible, natural radiance maps of the sky, taking into account expensive and hard-to-simulate atmospheric phenomena. In this work, we show how such models can be complemented by an equally simple, elegant and generic analytic continuous probability density function (PDF) that provides a very good approximation to the radiance-based distribution of the sky. We describe a fitting process used to properly parameterise a truncated Gaussian mixture model, which allows for exact, constant-time and minimal-memory sampling and evaluation of this PDF without rejection sampling, an important property for practical applications in offline and real-time rendering. We present experiments in a standard importance-sampling framework that showcase variance reduction approaching that of a more expensive inversion-sampling method using Summed Area Tables.

Item: Single-image Full-body Human Relighting
(The Eurographics Association, 2021) Lagunas, Manuel; Sun, Xin; Yang, Jimei; Villegas, Ruben; Zhang, Jianming; Shu, Zhixin; Masia, Belen; Gutierrez, Diego; Bousseau, Adrien and McGuire, Morgan
We present a single-image, data-driven method to automatically relight images containing full-body humans. Our framework is based on a realistic scene decomposition leveraging precomputed radiance transfer (PRT) and spherical harmonics (SH) lighting. In contrast to previous work, we lift the assumption of Lambertian materials and explicitly model diffuse and specular reflectance in our data. Moreover, we introduce an additional light-dependent residual term that accounts for errors in the PRT-based image reconstruction. We propose a new deep-learning architecture, tailored to the decomposition performed in PRT, that is trained using a combination of L1, logarithmic, and rendering losses. Our model outperforms the state of the art for full-body human relighting, both on synthetic images and on photographs.
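The exact, rejection-free sampling property claimed in "Sampling Clear Sky Models using Truncated Gaussian Mixtures" rests on inverse-CDF sampling of each truncated Gaussian component. A minimal 1-D sketch of that idea follows; the function name and parameters are illustrative, not taken from the paper, which fits the mixture to sky radiance and works over the sky dome rather than an interval:

```python
from statistics import NormalDist

def sample_truncated_gaussian_mixture(weights, means, sigmas, lo, hi, u1, u2):
    """Draw one exact sample from a 1-D Gaussian mixture truncated to [lo, hi].

    u1 in [0, 1) selects the mixture component; u2 in [0, 1) drives
    inverse-CDF sampling of the chosen truncated Gaussian. No rejection
    loop is needed, so the cost is constant per sample.
    """
    # Weight each component by its probability mass inside [lo, hi].
    dists = [NormalDist(m, s) for m, s in zip(means, sigmas)]
    masses = [w * (d.cdf(hi) - d.cdf(lo)) for w, d in zip(weights, dists)]
    total = sum(masses)

    # Pick a component proportionally to its truncated mass.
    acc = 0.0
    chosen = dists[-1]  # numerical fallback
    for d, mass in zip(dists, masses):
        acc += mass / total
        if u1 <= acc:
            chosen = d
            break

    # Invert the truncated CDF: map u2 into [cdf(lo), cdf(hi)].
    c_lo, c_hi = chosen.cdf(lo), chosen.cdf(hi)
    return chosen.inv_cdf(c_lo + u2 * (c_hi - c_lo))
```

By symmetry, a single standard Gaussian truncated to [-1, 1] sampled at u2 = 0.5 returns 0; in an importance-sampling loop, u1 and u2 would come from a random or low-discrepancy sequence.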