Computer Graphics Forum, Volume 43, Issue 4 (Rendering 2024)
Bridge Sampling for Connections via Multiple Scattering Events (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Schüßler, Vincent; Hanika, Johannes; Dachsbacher, Carsten; Garces, Elena; Haines, Eric
Explicit sampling of and connecting to light sources is often essential for reducing variance in Monte Carlo rendering. In dense, forward-scattering participating media, its benefit declines, as significant transport happens over longer multiple-scattering paths around the straight connection to the light. Sampling these paths is challenging, as their contribution is shaped by the product of reciprocal squared distance terms and the phase functions. Previous work demonstrates that sampling several of these terms jointly is crucial. However, these methods are tied to low-order scattering or struggle with highly peaked phase functions. We present a method for sampling a bridge: a subpath of arbitrary vertex count connecting two vertices. Its probability density is proportional to all phase functions at inner vertices and reciprocal squared distance terms. To achieve this, we importance sample the phase functions first, and subsequently all distances at once. For the latter, we sample an independent, preliminary distance for each edge of the bridge, and afterwards scale the bridge such that it matches the connection distance. The scale factor can be marginalized out analytically to obtain the probability density of the bridge. This approach leads to a simple algorithm and can construct bridges of any vertex count. For the case of one or two inserted vertices, we also show an alternative without scaling or marginalization. For practical path sampling, we present a method to sample the number of bridge vertices whose distribution depends on the connection distance, the phase function, and the collision coefficient.
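The two-stage distance sampling described above can be illustrated with a small sketch. This is a simplified toy version, not the paper's implementation: it assumes an isotropic phase function and only rescales the bridge to match the connection length (the full method also accounts for the bridge's orientation and marginalizes the scale factor analytically to obtain its density).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bridge(target, n_inner, sigma_t=1.0):
    """Toy bridge construction: sample a direction per edge (isotropic here;
    the real method importance samples the phase function), sample an
    independent exponential distance per edge, then rescale all distances
    so the bridge spans the connection distance |target|."""
    n_edges = n_inner + 1
    # random unit directions for each edge of the bridge
    v = rng.normal(size=(n_edges, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # independent, preliminary free-flight distances
    d = rng.exponential(scale=1.0 / sigma_t, size=n_edges)
    # endpoint reached by the unscaled bridge
    end = (d[:, None] * v).sum(axis=0)
    # scale so the bridge length matches the connection distance
    s = np.linalg.norm(target) / np.linalg.norm(end)
    return s * d, v
```

After scaling, walking the returned distances along the returned directions reaches a point at exactly the connection distance from the start vertex.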
While our importance sampling treats media as homogeneous, we demonstrate its effectiveness on heterogeneous media.

A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Poirier-Ginter, Yohan; Gauthier, Alban; Philip, Julien; Lalonde, Jean-François; Drettakis, George; Garces, Elena; Haines, Eric
Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields using such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction, allowing us to augment a single-illumination capture into a realistic, but possibly inconsistent, multi-illumination dataset with directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector.
We show results on synthetic and real multi-view data under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to allow realistic 3D relighting of complete scenes.

Learning to Rasterize Differentiably (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wu, Chenghao; Mailee, Hamila; Montazeri, Zahra; Ritschel, Tobias; Garces, Elena; Haines, Eric
Differentiable rasterization changes the standard formulation of primitive rasterization (enabling gradient flow from a pixel to its underlying triangles) by using distribution functions in different stages of rendering, creating a ''soft'' version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening operations. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We meta-learn tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose, and occlusion) so that they generalize to new and unseen differentiable rendering tasks with optimal softness.

Lossless Basis Expansion for Gradient-Domain Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Fang, Qiqin; Hachisuka, Toshiya; Garces, Elena; Haines, Eric
Gradient-domain rendering utilizes difference estimates with shift mapping to reduce variance in Monte Carlo rendering. Such difference estimates are effective under the assumption that the pixels being differenced have similar integrands. This assumption is often violated because it is common to have spatially varying BSDFs with material maps, which potentially result in a very different integrand per pixel.
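The assumption just stated, that correlated difference estimates only pay off when neighbouring integrands are similar, can be illustrated with a toy Monte Carlo experiment. The two integrands below are hypothetical stand-ins for two neighbouring pixels, and the identity sample reuse stands in for a shift mapping:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two similar "per-pixel" integrands over [0, 1] (hypothetical toy functions)
f_i = lambda x: np.sin(6 * x) ** 2
f_j = lambda x: np.sin(6 * x + 0.05) ** 2

def diff_independent(n):
    # estimate I_i - I_j with independent samples per pixel
    return f_i(rng.random(n)).mean() - f_j(rng.random(n)).mean()

def diff_correlated(n):
    # identity "shift map": evaluate both integrands at the same samples
    x = rng.random(n)
    return (f_i(x) - f_j(x)).mean()

var_ind = np.var([diff_independent(64) for _ in range(500)])
var_cor = np.var([diff_correlated(64) for _ in range(500)])
```

Because the integrands nearly cancel pointwise, the correlated difference estimator has far lower variance than differencing two independent estimates, which is exactly what a spatially varying BSDF can break.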
We introduce an extension of gradient-domain rendering that effectively supports such per-pixel variation in BSDFs based on basis expansion. Basis expansion for BSDFs has been used extensively for other problems in rendering, where the goal is to approximate a given BSDF by a weighted sum of predefined basis functions. We instead utilize lossless basis expansion, which represents a BSDF without any approximation by adding the remaining difference to the original basis expansion. This lossless basis expansion allows us to cancel more terms via shift mapping, resulting in low-variance difference estimates even with per-pixel BSDF variation. We also extend the Poisson reconstruction process to support this basis expansion. Regular gradient-domain rendering can be expressed as a special case of our extension, where the basis is simply the BSDF per pixel (i.e., no basis expansion). We provide proof-of-concept experiments and showcase the effectiveness of our method for scenes with highly varying material maps. Our results show noticeable improvement over regular gradient-domain rendering under both L1 and L2 reconstructions. The resulting formulation via basis expansion essentially serves as a new way of reusing paths among pixels in the presence of per-pixel variation.

MatUp: Repurposing Image Upsamplers for SVBRDFs (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Gauthier, Alban; Kerbl, Bernhard; Levallois, Jérémy; Faury, Robin; Thiery, Jean-Marc; Boubekeur, Tamy; Garces, Elena; Haines, Eric
We propose MATUP, an upsampling filter for material super-resolution. Our method takes as input a low-resolution SVBRDF and upscales its maps so that their rendering under various lighting conditions matches upsampled renderings inferred in the radiance domain by pre-trained RGB upsamplers.
We formulate our local filter as a compact multilayer perceptron (MLP), which acts on a small window of the input SVBRDF and is optimized using a data-fitting loss defined over upsampled radiance at various locations. This optimization is performed entirely at the scale of a single, independent material. In doing so, MATUP leverages the reconstruction capabilities that pre-trained RGB models acquire over large collections of natural images, and provides regularization over self-similar structures. In particular, our lightweight neural filter avoids retraining complex architectures from scratch or accessing any large collection of low/high-resolution material pairs, which do not actually exist at the scale RGB upsamplers are trained with. As a result, MATUP provides fine and coherent details in the upscaled material maps, as shown in the extensive evaluation we provide.

Neural Appearance Model for Cloth Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Soh, Guan Yu; Montazeri, Zahra; Garces, Elena; Haines, Eric
The realistic rendering of woven and knitted fabrics has posed significant challenges for many years. Previously, fiber-based micro-appearance models have achieved considerable success in attaining high levels of realism. However, rendering such models remains complex due to the intricate internal scattering among the hundreds of fibers within a yarn, requiring vast amounts of memory and time to render. In this paper, we introduce a new framework that captures the aggregated appearance by tracing many light paths through the underlying fiber geometry. We then employ lightweight neural networks to accurately model the aggregated BSDF, which allows for the precise modeling of a diverse array of materials while offering substantial improvements in speed and reductions in memory. Furthermore, we introduce a novel importance sampling scheme to further speed up the rate of convergence.
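The general idea of fitting a lightweight network to an aggregated reflectance signal can be sketched in miniature. Everything below is a hypothetical toy: a cosine lobe stands in for the aggregated yarn BSDF, and a one-hidden-layer MLP is trained with hand-written gradient descent; it is not the paper's architecture or parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for an aggregated BSDF: a cosine lobe over the outgoing angle
theta = rng.uniform(0, np.pi / 2, size=(256, 1))
target = np.cos(theta)

# One-hidden-layer MLP with tanh activations
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(x, y):
    return float(np.mean((forward(x)[1] - y) ** 2))

loss0 = mse(theta, target)
lr = 0.05
for _ in range(500):
    h, pred = forward(theta)
    g = 2 * (pred - target) / len(theta)   # dL/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)           # backprop through tanh
    gW1 = theta.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

loss1 = mse(theta, target)
```

The fitted network replaces the expensive simulation at query time; the paper additionally handles full directional domains and importance sampling.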
We validate the efficacy and versatility of our framework through comparisons with preceding fiber-based shading models as well as the most recent yarn-based model.

Neural Histogram-Based Glint Rendering of Surfaces With Spatially Varying Roughness (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Shah, Ishaan; Gamboa, Luis E.; Gruson, Adrien; Narayanan, P. J.; Garces, Elena; Haines, Eric
The complex, glinty appearance of detailed normal-mapped surfaces at different scales requires expensive per-pixel normal distribution function (NDF) computations. Moreover, large light sources further compound this integration and increase the noise in the Monte Carlo renderer. Specialized rendering techniques that explicitly express the underlying normal distribution have been developed to improve performance for glinty surfaces controlled by a fixed material roughness. We present a new method that supports spatially varying roughness, based on a neural histogram that computes per-pixel NDFs with arbitrary positions and sizes. Our representation is both memory and compute efficient. Additionally, we fully integrate direct illumination for all light directions in constant time. Our approach decouples roughness from the normal distribution, allowing live editing of the spatially varying roughness of complex normal-mapped objects. We demonstrate that our approach improves on previous work by achieving smaller footprints while offering GPU-friendly computation and a compact representation.

Neural SSS: Lightweight Object Appearance Representation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Tg, Thomson; Tran, Duc Minh; Jensen, Henrik W.; Ramamoorthi, Ravi; Frisvad, Jeppe Revall; Garces, Elena; Haines, Eric
We present a method for capturing the BSSRDF (bidirectional scattering-surface reflectance distribution function) of arbitrary geometry with a neural network.
We demonstrate how a compact neural network can represent the full 8-dimensional light transport within an object, including heterogeneous scattering. We develop an efficient rendering method using importance sampling that is able to render complex translucent objects under arbitrary lighting. Our method can also leverage the common planar half-space assumption, which allows it to represent one BSSRDF model that can be used across a variety of geometries. Our results demonstrate that we can render heterogeneous translucent objects under arbitrary lighting and obtain results that match references rendered using volumetric path tracing.

Non-Orthogonal Reduction for Rendering Fluorescent Materials in Non-Spectral Engines (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Fichet, Alban; Belcour, Laurent; Barla, Pascal; Garces, Elena; Haines, Eric
We propose a method to accurately handle fluorescence in a non-spectral (e.g., tristimulus) rendering engine, showcasing color-shifting and increased-luminance effects. Core to our method is a principled reduction technique that encodes the reradiation into a low-dimensional matrix working in the space of the renderer's color matching functions (CMFs). Our process is independent of a specific CMF set and allows for the addition of a non-visible ultraviolet band during light transport. Our representation visually matches full spectral light transport for measured fluorescent materials, even for challenging illuminants.

Patch Decomposition for Efficient Mesh Contours Extraction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Tsiapkolis, Panagiotis; Bénard, Pierre; Garces, Elena; Haines, Eric
Object-space occluding contours of triangular meshes (a.k.a. mesh contours) are at the core of many methods in computer graphics and computational geometry.
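As background, the basic per-edge contour test underlying such methods is simple: under orthographic viewing, a mesh edge lies on the occluding contour when its two adjacent faces point to opposite sides of the view direction. A minimal sketch of that test (not the paper's patch data structure):

```python
import numpy as np

def is_contour_edge(n0, n1, view_dir):
    """True when the edge's two adjacent face normals n0, n1 face
    opposite ways relative to view_dir (orthographic camera assumed),
    i.e. one face is front-facing and the other back-facing."""
    return float(np.dot(n0, view_dir)) * float(np.dot(n1, view_dir)) < 0.0
```

Accelerating this test over millions of edges, for example by culling whole patches with normal cones and bounding spheres, is what the paper addresses.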
A number of hierarchical data structures have been proposed to accelerate their computation on the CPU, but these do not map well to the GPU for real-time applications such as video games. We show that a simple, flat data structure composed of patches, each bounded by a normal cone and a bounding sphere, may reach this goal, provided it is constructed to maximize the probability of a patch being culled over all viewpoints. We derive a heuristic metric to efficiently estimate this probability, and present a greedy, bottom-up algorithm that constructs patches by grouping mesh edges according to this metric. In addition, we propose an effective way of computing their bounding spheres. We demonstrate through extensive experiments that this data structure achieves performance similar to the state of the art on the CPU while also being perfectly adapted to the GPU, leading to up to 5× speedups.

Practical Appearance Model for Foundation Cosmetics (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lanza, Dario; Padrón-Griffe, Juan Raúl; Pranovich, Alina; Muñoz, Adolfo; Frisvad, Jeppe Revall; Jarabo, Adrian; Garces, Elena; Haines, Eric
Cosmetic products have found their place in various aspects of human life, yet their digital appearance reproduction has received little attention. We present an appearance model for cosmetics, in particular for foundation layers, that reproduces the range of existing foundation appearances: from a glossy, to a matte, to an almost velvety look. Our model is a multilayered BSDF that reproduces the stacking of multiple layers of cosmetics.
Inspired by the microscopic particulates used in cosmetics, we model each individual layer as a stochastic participating medium with two types of scatterers that mimic the most prominent visual features of cosmetics: spherical diffusers, resulting in a uniform distribution of radiance, and platelets, responsible for the glossy look of certain cosmetics. We implement our model on top of the position-free Monte Carlo framework, which allows us to include multiple scattering. We validate our model against measured reflectance data, and demonstrate the versatility and expressiveness of our model by thoroughly exploring the range of appearances it can produce.

Realistic Facial Age Transformation with 3D Uplifting (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Xiaohui; Guarnera, Giuseppe Claudio; Lin, Arvin; Ghosh, Abhijeet; Garces, Elena; Haines, Eric
While current facial re-ageing methods can produce realistic results, they focus purely on 2D age transformation. In this work, we present an approach that transforms the age of a person in both facial appearance and shape across different ages while preserving their identity. We employ an α-(de)blending diffusion network with an age-to-α transformation to generate coarse structural changes, such as wrinkles. Additionally, we edit biophysical skin properties, including melanin and hemoglobin, to simulate skin color changes, producing realistic re-ageing results from ages 10 to 80 years. We also propose a geometric neural network that alters the coarse-scale facial geometry according to age, followed by a lightweight and efficient network that adds appropriate skin displacement on top of the coarse geometry.
Both qualitative and quantitative comparisons show that our method outperforms current state-of-the-art approaches.

Rendering 2024 CGF 43-4: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Garces, Elena; Haines, Eric

Residual Path Integrals for Re-rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Xu, Bing; Li, Tzu-Mao; Georgiev, Iliyan; Hedstrom, Trevor; Ramamoorthi, Ravi; Garces, Elena; Haines, Eric
Conventional rendering techniques are primarily designed and optimized for single-frame rendering. In practical applications, such as scene editing and animation rendering, users frequently encounter scenes where only a small portion is modified between consecutive frames. In this paper, we develop a novel approach to incremental re-rendering of scenes with dynamic objects, where only a small part of a scene moves from one frame to the next. We formulate the difference (or residual) in the image between two frames as a (correlated) light-transport integral which we call the residual path integral. Efficient numerical solution of this integral then involves (1) devising importance sampling strategies that focus on paths with non-zero residual-transport contributions and (2) choosing appropriate mappings between the native path spaces of the two frames. We introduce a set of path importance sampling strategies that trace from the moving object(s), which are the sources of residual energy. We explore path mapping strategies that generalize those from gradient-domain path tracing to our importance sampling techniques, specifically for dynamic scenes. Additionally, our formulation can be applied to material editing as a simpler special case. We demonstrate speed-ups over previous correlated sampling of path differences and over rendering the new frame independently.
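The residual idea above, re-rendering by adding a correlated difference to the cached previous frame while sampling only where the residual is non-zero, can be sketched with a 1D toy integral. The integrands below are hypothetical stand-ins for the two frames:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy per-frame integrands over [0, 1]: only x in (0.4, 0.5) changes
f_old = lambda x: np.sin(3 * x) ** 2
bump = lambda x: np.where((x > 0.4) & (x < 0.5), 0.5, 0.0)
f_new = lambda x: f_old(x) + bump(x)

# Cached previous-frame estimate (computed once with many samples)
I_old = f_old(rng.random(200_000)).mean()

# Residual estimator: importance sample only the changed region,
# whose pdf is 1 / 0.1 on its support
u = rng.uniform(0.4, 0.5, size=1024)
residual = (f_new(u) - f_old(u)).mean() * 0.1

I_new = I_old + residual
```

Because every residual sample lands where the frames actually differ, the update is essentially noise-free here, whereas re-rendering the new frame from scratch would spend most samples on unchanged regions.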
Our formulation brings new insights into the re-rendering problem and paves the way for devising new types of sampling techniques and path mappings with different trade-offs.

Scaling Painting Style Transfer (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Galerne, Bruno; Raad, Lara; Lezama, José; Morel, Jean-Michel; Garces, Elena; Haines, Eric
Neural style transfer (NST) is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image. It is particularly impressive when it comes to transferring the style of a painting to an image. NST was originally achieved by solving an optimization problem that matches the global statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate NST and produce larger images. However, our investigation shows that these accelerated methods all compromise the quality of the produced images in the context of painting style transfer. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution that solves the original global optimization for ultra-high-resolution (UHR) images, enabling multiscale NST at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons, as well as a perceptual study, show that our method produces style transfer of unmatched quality for such high-resolution painting styles.
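The ''global statistics'' matched by the original NST optimization are commonly Gram matrices of deep feature activations (Gatys et al.). A minimal sketch of that statistic, using plain arrays in place of VGG features:

```python
import numpy as np

def gram(features):
    """Gram matrix of a C x H x W feature activation: channel-wise
    correlations, discarding spatial layout (the 'global statistic')."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(fa, fb):
    """Squared distance between the Gram matrices of two activations."""
    return float(np.mean((gram(fa) - gram(fb)) ** 2))
```

In the full method this loss is summed over several network layers; the paper's contribution is evaluating those forward and backward passes tile by tile so the optimization fits in GPU memory at UHR sizes.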
By careful comparison, we show that state-of-the-art fast methods are still prone to artifacts, suggesting that fast painting style transfer remains an open problem.

Stereo-consistent Screen Space Reflection (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wu, XiaoLoong; Xu, Yanning; Wang, Lu; Garces, Elena; Haines, Eric
Screen Space Reflection (SSR) can reliably achieve highly efficient reflective effects, significantly enhancing users' sense of realism in real-time applications. However, when directly applied to stereo rendering, popular SSR algorithms lead to inconsistencies due to the differing information available to the left and right eyes. This inconsistency, while not directly visible, results in visual discomfort. This paper analyzes and demonstrates how screen-space geometries, fade boundaries, and reflection samples introduce inconsistent cues. Exploiting the complementary nature of the information in the two screens, we introduce a stereo-aware SSR method to alleviate the visual discomfort caused by screen-space disparities. By contrasting our stereo-aware SSR with conventional SSR and ray-traced results, we showcase the effectiveness of our approach in mitigating the inconsistencies stemming from screen-space differences while introducing affordable performance overhead for real-time rendering.

VMF Diffuse: A Unified Rough Diffuse BRDF (The Eurographics Association and John Wiley & Sons Ltd., 2024)
d'Eon, Eugene; Weidlich, Andrea; Garces, Elena; Haines, Eric
We present a practical analytic BRDF that approximates scattering from a generalized microfacet volume with a von Mises-Fisher NDF. Our BRDF seamlessly blends from smooth Lambertian, through moderately rough height fields with Beckmann-like statistics, into highly rough/porous behaviors that have been lacking from prior models. At maximum roughness, our model reduces to the recent Lambert-sphere BRDF.
We validate our model by comparing against simulations of scattering from geometries with randomly placed Lambertian spheres, and show an improvement relative to a rough Beckmann BRDF at very high roughness.
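The von Mises-Fisher distribution at the core of this model can be sampled on the sphere with the standard inversion method. A sketch with the mean direction fixed to +z (an illustration of the distribution itself, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_vmf_z(kappa, n):
    """Draw n unit directions from a von Mises-Fisher distribution on S^2
    with mean direction +z and concentration kappa, via the exact inverse
    CDF of cos(theta): w = 1 + log(u + (1 - u) * exp(-2*kappa)) / kappa."""
    u = rng.random(n)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(np.clip(1.0 - w * w, 0.0, None))  # sin(theta), clamped for safety
    return np.stack([s * np.cos(phi), s * np.sin(phi), w], axis=1)
```

Large kappa concentrates samples around +z (approaching a smooth mirror-like NDF), while kappa near zero approaches a uniform spherical distribution, which matches the smooth-to-porous blend the abstract describes.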