Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions
dc.contributor.author | D'Orazio, Antonio | en_US |
dc.contributor.author | Sforza, Davide | en_US |
dc.contributor.author | Pellacini, Fabio | en_US |
dc.contributor.author | Masi, Iacopo | en_US |
dc.contributor.editor | Caputo, Ariel | en_US |
dc.contributor.editor | Garro, Valeria | en_US |
dc.contributor.editor | Giachetti, Andrea | en_US |
dc.contributor.editor | Castellani, Umberto | en_US |
dc.contributor.editor | Dulecha, Tinsae Gebrechristos | en_US |
dc.date.accessioned | 2024-11-11T12:48:13Z | |
dc.date.available | 2024-11-11T12:48:13Z | |
dc.date.issued | 2024 | |
dc.description.abstract | Editing High Dynamic Range (HDR) environment maps using an inverse differentiable rendering architecture is a complex inverse problem due to the sparsity of relevant pixels and the challenges in balancing light sources and background. The pixels illuminating the objects are a small fraction of the total image, leading to noise and convergence issues when the optimization directly involves pixel values. HDR images, with pixel values beyond the typical Standard Dynamic Range (SDR), pose additional challenges. Higher learning rates corrupt the background during optimization, while lower learning rates fail to manipulate light sources. Our work introduces a novel method for editing HDR environment maps using differentiable rendering, addressing both the sparsity of relevant pixels and the high variance of HDR values. Instead of introducing strong priors that extract the relevant HDR pixels and separate the light sources, or resorting to tricks such as optimizing the HDR image in log space, we propose to model the optimized environment map with a new variant of implicit neural representations able to handle HDR images. The neural representation is trained with adversarial perturbations over the weights to ensure smooth changes in the output when it receives gradients from the inverse rendering. In this way, we obtain novel environment maps at low cost, without relying on the latent spaces of expensive generative models, while maintaining the original visual consistency. Experimental results demonstrate the method's effectiveness in reconstructing the desired lighting effects while preserving the fidelity of the map and reflections on objects in the scene. Our approach paves the way to interesting tasks, such as estimating a new environment map given a rendering with novel light sources while maintaining the initial perceptual features, and enabling brush stroke-based editing of existing environment maps. Our code is publicly available at github.com/OmnAI-Lab/R-SIREN. | en_US |
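The abstract describes representing the environment map as a SIREN-style implicit function whose weights are trained under adversarial perturbations so that inverse-rendering gradients produce smooth output changes. A minimal illustrative sketch of that idea is below; it is not the authors' R-SIREN code, and the layer sizes, frequency `OMEGA`, and the noise-based weight-perturbation probe are all assumptions made for illustration.

```python
# Hypothetical sketch (NOT the authors' R-SIREN implementation): a tiny
# SIREN-style implicit function f(x, y) -> RGB, with a simple probe that
# perturbs the weights to check that the output changes smoothly.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, OMEGA = 16, 30.0  # hidden width and SIREN frequency (assumed values)

def init_siren(in_dim, hidden, out_dim):
    # SIREN-style init: first layer uniform(-1/n, 1/n); later layers
    # uniform(-sqrt(6/n)/omega, sqrt(6/n)/omega).
    W1 = rng.uniform(-1.0 / in_dim, 1.0 / in_dim, (hidden, in_dim))
    b1 = np.zeros(hidden)
    lim = np.sqrt(6.0 / hidden) / OMEGA
    W2 = rng.uniform(-lim, lim, (out_dim, hidden))
    b2 = np.zeros(out_dim)
    return [W1, b1, W2, b2]

def siren(params, xy):
    W1, b1, W2, b2 = params
    h = np.sin(OMEGA * (xy @ W1.T + b1))  # sinusoidal activation
    return h @ W2.T + b2                  # unbounded linear output (HDR-friendly)

def perturbed_output(params, xy, eps):
    # Weight-space perturbation probe: jitter every parameter by eps.
    noisy = [p + eps * rng.standard_normal(p.shape) for p in params]
    return siren(noisy, xy)

params = init_siren(2, HIDDEN, 3)
xy = rng.uniform(-1.0, 1.0, (64, 2))      # normalized pixel coordinates
clean = siren(params, xy)
noisy = perturbed_output(params, xy, 1e-3)
drift = float(np.abs(clean - noisy).mean())  # small drift => smooth under weight noise
```

Training such a representation adversarially (as the abstract describes) would additionally take gradient steps against the worst-case weight perturbation, rather than the random jitter used in this probe.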
dc.description.sectionheaders | Rendering | |
dc.description.seriesinformation | Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference | |
dc.identifier.doi | 10.2312/stag.20241339 | |
dc.identifier.isbn | 978-3-03868-265-3 | |
dc.identifier.issn | 2617-4855 | |
dc.identifier.pages | 11 pages | |
dc.identifier.uri | https://doi.org/10.2312/stag.20241339 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.2312/stag20241339 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Artificial intelligence; Computer graphics; Image manipulation | |
dc.subject | Computing methodologies → Artificial intelligence | |
dc.subject | Computer graphics | |
dc.subject | Image manipulation | |
dc.title | Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions | en_US |