GSEditPro: 3D Gaussian Splatting Editing with Attention-based Progressive Localization
dc.contributor.author | Sun, Yanhao | en_US |
dc.contributor.author | Tian, Runze | en_US |
dc.contributor.author | Han, Xiao | en_US |
dc.contributor.author | Liu, Xinyao | en_US |
dc.contributor.author | Zhang, Yan | en_US |
dc.contributor.author | Xu, Kai | en_US |
dc.contributor.editor | Chen, Renjie | en_US |
dc.contributor.editor | Ritschel, Tobias | en_US |
dc.contributor.editor | Whiting, Emily | en_US |
dc.date.accessioned | 2024-10-13T18:07:47Z | |
dc.date.available | 2024-10-13T18:07:47Z | |
dc.date.issued | 2024 | |
dc.description.abstract | With the emergence of large-scale Text-to-Image (T2I) models and implicit 3D representations like Neural Radiance Fields (NeRF), many text-driven generative editing methods based on NeRF have appeared. However, the implicit encoding of geometric and textural information poses challenges in accurately locating and controlling objects during editing. Recently, significant advancements have been made in editing methods for 3D Gaussian Splatting, a real-time rendering technology that relies on explicit representation. However, these methods still suffer from issues including inaccurate localization and limited control over edits. To tackle these challenges, we propose GSEditPro, a novel 3D scene editing framework that allows users to perform various creative and precise edits using text prompts only. Leveraging the explicit nature of the 3D Gaussian distribution, we introduce an attention-based progressive localization module to add semantic labels to each Gaussian during rendering. This enables precise localization of editing areas by classifying Gaussians according to their relevance to the editing prompts, derived from the cross-attention layers of the T2I model. Furthermore, we present an innovative editing optimization method based on 3D Gaussian Splatting, obtaining stable and refined editing results through the guidance of Score Distillation Sampling and pseudo ground truth. We demonstrate the efficacy of our method through extensive experiments. | en_US |
dc.description.number | 7 | |
dc.description.sectionheaders | 3D Reconstruction and Novel View Synthesis II | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 43 | |
dc.identifier.doi | 10.1111/cgf.15215 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 12 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.15215 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf15215 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Rendering; Point-based models; Computer vision representations | |
dc.subject | Computing methodologies → Rendering | |
dc.subject | Point-based models | |
dc.subject | Computer vision representations | |
dc.title | GSEditPro: 3D Gaussian Splatting Editing with Attention-based Progressive Localization | en_US |