Browsing by Author "Ghosh, Abhijeet"
Now showing 1 - 13 of 13
Item Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging (The Eurographics Association and John Wiley & Sons Ltd., 2023) Fan, Chongrui; Lin, Yiming; Ghosh, Abhijeet; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
We present a deep neural network-based method that acquires high-quality shape and spatially varying reflectance of 3D objects using smartphone multi-lens imaging. Our method acquires two images simultaneously, using a zoom lens and a wide-angle lens of a smartphone, under either natural illumination or phone-flash conditions, effectively functioning like a single-shot method. Unlike traditional multi-view stereo methods, which require sufficient differences in viewpoint and only estimate depth at a certain coarse scale, our method estimates fine-scale depth by utilising an optical-flow field extracted from the subtle baseline and perspective differences due to the different optics of the two simultaneously captured images. We further guide the SVBRDF estimation using the estimated depth, resulting in superior results compared to existing single-shot methods.

Item Frontmatter: Pacific Graphics 2018 (The Eurographics Association and John Wiley & Sons Ltd., 2018) Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes

Item Frontmatter: Pacific Graphics 2018 - Short Papers and Posters (The Eurographics Association, 2018) Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes

Item Neural BTF Compression and Interpolation (The Eurographics Association and John Wiley & Sons Ltd., 2019) Rainer, Gilles; Jakob, Wenzel; Ghosh, Abhijeet; Weyrich, Tim; Alliez, Pierre; Pellacini, Fabio
The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance.
A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While such data can faithfully record complex light interactions in the material, its main drawback is the massive memory requirement, both for storage and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix-factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artefacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline.
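To make the matrix-factorization baseline mentioned above concrete, here is a minimal sketch (not the paper's code; the data, shapes, and rank are invented) of PCA-style BTF compression via truncated SVD, where each row of the data matrix holds one texel's reflectance across all (light, view) samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical BTF slice: 256 texels x 1500 (light, view) samples,
# synthesized as low-rank signal plus a little noise.
texels, samples, rank = 256, 1500, 8
btf = rng.standard_normal((texels, rank)) @ rng.standard_normal((rank, samples))
btf += 0.01 * rng.standard_normal((texels, samples))

# Truncated SVD: keep k components as the compressed representation.
k = 8
u, s, vt = np.linalg.svd(btf, full_matrices=False)
codes = u[:, :k] * s[:k]   # per-texel coefficients (texels x k)
basis = vt[:k]             # shared angular basis (k x samples)

reconstruction = codes @ basis
rel_error = np.linalg.norm(btf - reconstruction) / np.linalg.norm(btf)
ratio = btf.size / (codes.size + basis.size)
print(f"relative error {rel_error:.4f}, compression ratio {ratio:.1f}x")
```

Note that rendering from such a factorization must still interpolate the columns of `basis` between the discrete (light, view) samples, which is exactly the source of the blurring and ghosting the continuous neural decoder avoids.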
We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artefacts.

Item Neural Shading Fields for Efficient Facial Inverse Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2023) Rainer, Gilles; Bridgeman, Lewis; Ghosh, Abhijeet; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Given a set of unstructured photographs of a subject under unknown lighting, 3D geometry reconstruction is relatively easy, but reflectance estimation remains a challenge, because it requires disentangling lighting from reflectance in the ambiguous observations. Solutions exist that leverage statistical, data-driven priors to output plausible reflectance maps even in the underconstrained single-view, unknown-lighting setting. We propose a very low-cost inverse optimization method that does not rely on data-driven priors, obtaining high-quality diffuse and specular albedo and normal maps in the multi-view, unknown-lighting setting. We introduce compact neural networks that learn the shading of a given scene by efficiently finding correlations in the appearance across the face. We jointly optimize the implicit global illumination of the scene in the networks with explicit diffuse and specular reflectance maps that can subsequently be used for physically-based rendering. We analyze the veracity of results on ground-truth data, and demonstrate that our reflectance maps preserve more detail and personal identity than state-of-the-art deep learning and differentiable rendering methods.

Item On-Site Example-Based Material Appearance Acquisition (The Eurographics Association and John Wiley & Sons Ltd., 2019) Lin, Yiming; Peers, Pieter; Ghosh, Abhijeet; Boubekeur, Tamy; Sen, Pradeep
We present a novel example-based material appearance modeling method suitable for rapid digital content creation.
Our method requires only a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge of the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance, by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible "rapid appearance modeling".

Item Polarization-imaging Surface Reflectometry using Near-field Display (The Eurographics Association, 2022) Nogue, Emilie; Lin, Yiming; Ghosh, Abhijeet; Ghosh, Abhijeet; Wei, Li-Yi
We present a practical method for measurement of spatially varying isotropic surface reflectance of planar samples using a combination of single-view polarization imaging and near-field display illumination. Unlike previous works that have required multi-view imaging or more complex polarization measurements, our method requires only three linear-polarizer measurements from a single viewpoint to estimate diffuse and specular albedo and spatially varying specular roughness. We obtain high-quality estimates of the surface normal with two additional polarized measurements under a gradient illumination pattern.
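The principle behind polarization-based diffuse–specular separation can be illustrated with the classical two-measurement textbook version (this is a simplification, not the paper's three-measurement near-field display scheme): specular reflection preserves the polarization of the illumination while diffuse reflection depolarizes it, so parallel- and cross-polarized images suffice to split the two components.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth for a small image patch (made-up values).
diffuse = rng.uniform(0.1, 0.6, (4, 4))
specular = rng.uniform(0.0, 0.3, (4, 4))

# Under linearly polarized illumination, depolarized diffuse light splits
# evenly between analyzer orientations, while specular light passes only
# the parallel analyzer:
i_parallel = diffuse / 2 + specular   # analyzer aligned with illumination
i_cross = diffuse / 2                 # analyzer crossed with illumination

# Separation from the two measurements:
diffuse_est = 2 * i_cross
specular_est = i_parallel - i_cross
```

In this idealized model the separation is exact; real measurements additionally contend with partial polarization preservation and camera noise.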
Our approach enables high-quality renderings of planar surfaces while reducing the number of measurements to a near-optimal count for the estimated SVBRDF parameters.

Item Practical Acquisition of Shape and Plausible Appearance of Reflective and Translucent Objects (The Eurographics Association and John Wiley & Sons Ltd., 2023) Lin, Arvin; Lin, Yiming; Ghosh, Abhijeet; Ritschel, Tobias; Weidlich, Andrea
We present a practical method for acquisition of shape and plausible appearance of reflective and translucent objects for realistic rendering and relighting applications. Such objects are extremely challenging to scan with existing capture setups, and have previously required complex light-stage hardware emitting continuous illumination. We instead employ a practical capture setup consisting of a set of desktop LCD screens that illuminate such objects with piecewise-continuous illumination for acquisition. We employ phase-shifted sinusoidal illumination for novel estimation of high-quality photometric normals and transmission vectors, along with diffuse-specular separated reflectance/transmission maps for realistic relighting.
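Phase-shifted sinusoidal patterns such as those mentioned above are conventionally decoded per pixel with the classic N-step phase-shifting formula; the following is a minimal synthetic sketch of that standard decoding (not the paper's pipeline, and the pixel values are invented):

```python
import numpy as np

def decode_phase(intensities):
    """Recover phase from N >= 3 equally phase-shifted sinusoidal observations.

    Assumes the observation model I_k = offset + amplitude * cos(phase + 2*pi*k/N).
    """
    n = len(intensities)
    deltas = 2 * np.pi * np.arange(n) / n
    s = np.sum(intensities * np.sin(deltas))
    c = np.sum(intensities * np.cos(deltas))
    # For equally spaced shifts, s = -amplitude*(N/2)*sin(phase) and
    # c = amplitude*(N/2)*cos(phase), so the phase follows from arctan2.
    return np.arctan2(-s, c) % (2 * np.pi)

# Synthetic pixel observed under 4 shifted patterns.
true_phase = 1.234
shifts = 2 * np.pi * np.arange(4) / 4
observations = 0.5 + 0.3 * np.cos(true_phase + shifts)
recovered = decode_phase(observations)
```

The offset and amplitude terms cancel in the two sums, which is what makes the decoding robust to the unknown ambient level and albedo of each pixel.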
We further employ neural in-painting to fill holes in our measurements caused by gaps in the screen illumination, and a novel NeuS-based neural rendering that combines the shape and reflectance maps acquired from multiple viewpoints for high-quality 3D surface geometry reconstruction, along with plausible, realistic rendering of complex light transport in such objects.

Item Practical Measurement and Reconstruction of Spectral Skin Reflectance (The Eurographics Association and John Wiley & Sons Ltd., 2020) Gitlina, Yuliya; Guarnera, Giuseppe Claudio; Dhillon, Daljit Singh; Hansen, Jan; Lattas, Alexandros; Pai, Dinesh; Ghosh, Abhijeet; Dachsbacher, Carsten; Pharr, Matt
We present two practical methods for measurement of spectral skin reflectance suited for live subjects, and drive a spectral BSSRDF model of appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method illuminates a subject with two complementary uniform spectral illumination conditions using a multispectral LED sphere to estimate spatially varying chromophore parameters, including melanin and hemoglobin concentration, melanin blend-type fraction, and epidermal hemoglobin fraction. We demonstrate that our proposed complementary measurements enable higher-quality estimates of chromophores than those obtained using standard broadband illumination, while being suitable for integration with multi-view facial capture using regular color cameras. Besides novel optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin-patch measurements using a hand-held dermatological skin measurement device, a Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a novel approach for parameter estimation from the measurements using neural networks, which is significantly faster than a lookup-table search and avoids parameter quantization.
We demonstrate high-quality matches of skin appearance with photographs for a variety of skin types with our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.

Item Rendering 2022 CGF 41-4: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2022) Ghosh, Abhijeet; Wei, Li-Yi

Item Rendering 2022 Symposium Track: Frontmatter (The Eurographics Association, 2022) Ghosh, Abhijeet; Wei, Li-Yi

Item Spectral Upsampling Approaches for RGB Illumination (The Eurographics Association, 2022) Guarnera, Giuseppe Claudio; Gitlina, Yuliya; Deschaintre, Valentin; Ghosh, Abhijeet; Ghosh, Abhijeet; Wei, Li-Yi
We present two practical approaches for high-fidelity spectral upsampling of previously recorded RGB illumination in the form of an image-based representation such as an RGB light probe. Unlike previous approaches that require multiple measurements with a spectrometer or a reference color chart under a target illumination environment, our method requires no additional information for the spectral upsampling step. Instead, we construct a data-driven basis of spectral distributions for incident illumination from a set of six RGBW LEDs (three narrowband and three broadband) that we employ to represent a given RGB color as a convex combination of the six basis spectra. We propose two different approaches for estimating the weights of the convex combination, using (a) a genetic algorithm and (b) neural networks. We additionally propose a theoretical basis consisting of a set of narrow and broad Gaussians as a generalization of the approach, and also evaluate an alternate LED basis for spectral upsampling.
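The convex-combination step described above can be illustrated with a simple solver: given the RGB responses of six basis spectra (columns of a 3x6 matrix, with made-up numbers here), find nonnegative weights summing to one whose combination reproduces a target RGB. The paper estimates these weights with a genetic algorithm or a neural network; projected gradient descent onto the probability simplex is used below purely as an illustrative stand-in.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fit_weights(basis_rgb, target_rgb, iters=20000):
    """Minimize ||basis_rgb @ w - target_rgb||^2 over the probability simplex."""
    w = np.full(basis_rgb.shape[1], 1.0 / basis_rgb.shape[1])
    step = 1.0 / np.linalg.norm(basis_rgb.T @ basis_rgb, 2)  # 1/Lipschitz
    for _ in range(iters):
        grad = basis_rgb.T @ (basis_rgb @ w - target_rgb)
        w = project_simplex(w - step * grad)
    return w

# Made-up RGB responses of six hypothetical LED basis spectra (columns).
rng = np.random.default_rng(2)
basis_rgb = rng.uniform(0.05, 1.0, (3, 6))

# A target color that is reachable by construction.
w_true = project_simplex(rng.uniform(0.0, 1.0, 6))
target_rgb = basis_rgb @ w_true

w = fit_weights(basis_rgb, target_rgb)
rgb_error = np.linalg.norm(basis_rgb @ w - target_rgb)
```

Because six weights are constrained by only three RGB equations, many weight vectors reproduce the same color; the upsampled spectrum is then the weighted sum of the corresponding basis spectra.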
Our spectral upsampling approach achieves good qualitative matches between the predicted and ground-truth illumination spectra, while achieving near-perfect matches of the RGB color of the given illumination in the vast majority of cases. We demonstrate that the spectrally upsampled RGB illumination can be employed for various applications, including improved lighting reproduction as well as more accurate spectral rendering.

Item Unified Neural Encoding of BTFs (The Eurographics Association and John Wiley & Sons Ltd., 2020) Rainer, Gilles; Ghosh, Abhijeet; Jakob, Wenzel; Weyrich, Tim; Panozzo, Daniele; Assarsson, Ulf
Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex data such as BTF texels can prove challenging, as such models tend to describe restricted function spaces that cannot encompass real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material. Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials, and which projects reflectance measurements to a shared latent parameter space. Similarly to SVBRDF fitting, real-world materials are represented by parameter maps, and the decoder network is analogous to an analytic BRDF expression (also parametrized on light and view directions for practical rendering applications). With this approach, encoding and decoding materials becomes a simple matter of evaluating the network.
We train and validate on BTF datasets of the University of Bonn, but there are no prerequisites on either the number of angular reflectance samples or the sample positions. Additionally, we show that the latent space is well behaved and can be sampled from, for applications such as mipmapping and texture synthesis.
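As a purely structural sketch of the decoder idea shared by the neural BTF works above (this is not the authors' architecture; the sizes and weights are invented and untrained), a small MLP maps a per-texel latent code plus light and view directions to RGB, so evaluating the material at render time is a single network evaluation:

```python
import numpy as np

rng = np.random.default_rng(3)

LATENT_DIM, HIDDEN = 8, 32  # invented sizes

# Random, untrained weights -- this only illustrates the architecture.
w1 = rng.standard_normal((LATENT_DIM + 6, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
w2 = rng.standard_normal((HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def decode(latent, light_dir, view_dir):
    """Decoder: (latent code, light dir, view dir) -> RGB reflectance."""
    x = np.concatenate([latent, light_dir, view_dir])
    h = np.maximum(w1.T @ x + b1, 0.0)     # ReLU hidden layer
    return np.maximum(w2.T @ h + b2, 0.0)  # clamp to nonnegative RGB

latent = rng.standard_normal(LATENT_DIM)   # stands in for one texel's code
wi = np.array([0.0, 0.0, 1.0])             # light direction
wo = np.array([0.3, 0.0, 0.95])
wo /= np.linalg.norm(wo)
rgb = decode(latent, wi, wo)
```

Because the directions are continuous inputs rather than indices into a discrete measurement table, reflectance can be queried anywhere on the hemispheres without linear interpolation between samples.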