Temporally Stable Real-Time Joint Neural Denoising and Supersampling

dc.contributor.authorThomas, Manu Mathewen_US
dc.contributor.authorLiktor, Gaboren_US
dc.contributor.authorPeters, Christophen_US
dc.contributor.authorKim, Sungyeen_US
dc.contributor.authorVaidyanathan, Karthiken_US
dc.contributor.authorForbes, Angus G.en_US
dc.contributor.editorSpjut, Josefen_US
dc.contributor.editorStamminger, Marcen_US
dc.contributor.editorZordan, Victoren_US
dc.date.accessioned2023-01-23T10:23:32Z
dc.date.available2023-01-23T10:23:32Z
dc.date.issued2022
dc.description.abstractRecent advances in ray tracing hardware bring real-time path tracing into reach, and ray traced soft shadows, glossy reflections, and diffuse global illumination are now common features in games. Nonetheless, ray budgets are still limited. This results in undersampling, which manifests as aliasing and noise. Prior work addresses these issues separately. While temporal supersampling methods based on neural networks have gained wide use in modern games due to their better robustness, neural denoising remains challenging because of its higher computational cost. We introduce a novel neural network architecture for real-time rendering that combines supersampling and denoising, thus lowering the cost compared to two separate networks. This is achieved by sharing a single low-precision feature extractor with multiple higher-precision filter stages. To reduce cost further, our network takes low-resolution inputs and reconstructs a high-resolution denoised supersampled output. Our technique produces temporally stable high-fidelity results that significantly outperform state-of-the-art real-time statistical or analytical denoisers combined with TAA or neural upsampling to the target resolution.en_US
dc.description.number3
dc.description.sectionheadersSampling and Filtering
dc.description.seriesinformationProceedings of the ACM on Computer Graphics and Interactive Techniques
dc.description.volume5
dc.identifier.doi10.1145/3543870
dc.identifier.issn2577-6193
dc.identifier.urihttps://doi.org/10.1145/3543870
dc.identifier.urihttps://diglib.eg.org:443/handle/10.1145/3543870
dc.publisherACM Association for Computing Machineryen_US
dc.subjectCCS Concepts: Computer systems organization -> Neural networks; Rendering. Additional Key Words and Phrases: kernel prediction, ray tracing, denoising, antialiasing, supersampling, super-resolution, real-time rendering, deep learning
dc.subjectComputer systems organization
dc.subjectNeural Network
dc.subjectRendering
dc.subjectKernel prediction
dc.subjectray tracing
dc.subjectdenoising
dc.subjectantialiasing
dc.subjectsupersampling
dc.subjectsuper-resolution
dc.subjectreal-time rendering
dc.subjectdeep learning
dc.titleTemporally Stable Real-Time Joint Neural Denoising and Supersamplingen_US