SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields

dc.contributor.authorWang, Yuzeen_US
dc.contributor.authorWang, Junyien_US
dc.contributor.authorWang, Chenen_US
dc.contributor.authorDuan, Wantongen_US
dc.contributor.authorBao, Yongtangen_US
dc.contributor.authorQi, Yueen_US
dc.contributor.editorChen, Renjieen_US
dc.contributor.editorRitschel, Tobiasen_US
dc.contributor.editorWhiting, Emilyen_US
dc.date.accessioned2024-10-13T18:09:46Z
dc.date.available2024-10-13T18:09:46Z
dc.date.issued2024
dc.description.abstractThis paper introduces a novel continual learning framework for synthesising novel views of multiple scenes, learning multiple 3D scenes incrementally and updating the network parameters only with the training data of the upcoming new scene. We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function. While NeRF and its extensions have shown a powerful capability for rendering photo-realistic novel views of a single 3D scene, managing these growing 3D NeRF assets efficiently is a new scientific problem. Very few works focus on the efficient representation or continual learning capability of multiple scenes, which is crucial for the practical applications of NeRF. To achieve these goals, our key idea is to represent multiple scenes as the linear combination of a cross-scene weight matrix and a set of scene-specific weight matrices generated from a global parameter generator. Furthermore, we propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model. Representing multiple 3D scenes with such weight matrices significantly reduces memory requirements. At the same time, the uncertain surface distillation strategy greatly mitigates the catastrophic forgetting problem and maintains the photo-realistic rendering quality of previous scenes. Experiments show that the proposed approach achieves state-of-the-art rendering quality for continual learning of NeRF on the NeRF-Synthetic, LLFF, and TanksAndTemples datasets while maintaining extremely low storage cost.en_US
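As a rough illustration of the weight composition described in the abstract, the Python sketch below composes one NeRF MLP layer's weights as a linear combination of a shared cross-scene matrix and a scene-specific matrix produced by a global parameter generator. All names, shapes, and the single linear generator layer are illustrative assumptions, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT = 256, 256   # width of one NeRF MLP layer (assumed)
CODE_DIM = 32            # size of the per-scene latent code (assumed)

# Cross-scene weight matrix shared by all scenes.
W_cross = rng.normal(scale=0.02, size=(D_OUT, D_IN))

# Global parameter generator: here a single linear map from a scene code
# to a flattened scene-specific weight matrix (illustrative assumption).
G = rng.normal(scale=0.02, size=(D_OUT * D_IN, CODE_DIM))

def layer_weights(scene_code, alpha=1.0):
    """Effective weights for one scene: shared matrix plus the
    generated scene-specific matrix, combined linearly."""
    W_scene = (G @ scene_code).reshape(D_OUT, D_IN)
    return W_cross + alpha * W_scene

# When a new scene arrives, only its code (and the generator) need training;
# per-scene storage is CODE_DIM numbers rather than a full MLP copy.
scene_codes = [rng.normal(size=CODE_DIM) for _ in range(3)]
print(layer_weights(scene_codes[0]).shape)  # (256, 256)

Storing one shared matrix plus a short code per scene, instead of a separate network per scene, is what keeps the per-scene memory footprint small in this kind of scheme.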
dc.description.number7
dc.description.sectionheadersNeural Radiance Fields and Gaussian Splatting
dc.description.seriesinformationComputer Graphics Forum
dc.description.volume43
dc.identifier.doi10.1111/cgf.15255
dc.identifier.issn1467-8659
dc.identifier.pages12 pages
dc.identifier.urihttps://doi.org/10.1111/cgf.15255
dc.identifier.urihttps://diglib.eg.org/handle/10.1111/cgf15255
dc.publisherThe Eurographics Association and John Wiley & Sons Ltd.en_US
dc.subjectCCS Concepts: Computing methodologies → Rendering; Machine learning; Computer vision
dc.subjectComputing methodologies → Rendering
dc.subjectMachine learning
dc.subjectComputer vision
dc.titleSCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fieldsen_US
Files
Original bundle
Name: cgf15255.pdf | Size: 16.7 MB | Format: Adobe Portable Document Format
Name: paper1001_mm.pdf | Size: 1.74 MB | Format: Adobe Portable Document Format