GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures

dc.contributor.author: Gruber, Aurel
dc.contributor.author: Collins, Edo
dc.contributor.author: Meka, Abhimitra
dc.contributor.author: Mueller, Franziska
dc.contributor.author: Sarkar, Kripasindhu
dc.contributor.author: Orts-Escolano, Sergio
dc.contributor.author: Prasso, Luca
dc.contributor.author: Busch, Jay
dc.contributor.author: Gross, Markus
dc.contributor.author: Beeler, Thabo
dc.contributor.editor: Bermano, Amit H.
dc.contributor.editor: Kalogerakis, Evangelos
dc.date.accessioned: 2024-04-30T09:09:54Z
dc.date.available: 2024-04-30T09:09:54Z
dc.date.issued: 2024
dc.description.abstract: High-resolution texture maps are essential to render photoreal digital humans for visual effects or to generate data for machine learning. The acquisition of high-resolution assets at scale is cumbersome: it involves enrolling a large number of human subjects, using expensive multi-view camera setups, and significant manual artistic effort to align the textures. To alleviate these problems, we introduce GANtlitz (a play on the German noun Antlitz, meaning face), a generative model that can synthesize multi-modal ultra-high-resolution face appearance maps for novel identities. Our method solves three distinct challenges: 1) the unavailability of the very large data corpus generally required for training generative models, 2) the memory and computational limitations of training a GAN at ultra-high resolutions, and 3) the consistency of appearance features such as skin color, pores, and wrinkles across different modalities in high-resolution textures. We introduce dual-style blocks, an extension of the style blocks of the StyleGAN2 architecture, which improve multi-modal synthesis. Our patch-based architecture is trained only on image patches obtained from a small set of face textures (<100), yet it allows us to generate seamless appearance maps of novel identities at 6k×4k resolution. Extensive qualitative and quantitative evaluations and baseline comparisons show the efficacy of our proposed system.
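The abstract describes dual-style blocks as an extension of StyleGAN2's style blocks for cross-modal consistency. The sketch below is only an illustration of that general idea, not the authors' implementation: it combines a style code shared across modalities (for identity-level features) with a per-modality style code, then applies StyleGAN2-style weight modulation and demodulation. All function names and the exact combination scheme are assumptions.

```python
import numpy as np

def modulate(weight, style):
    """StyleGAN2-style weight modulation with demodulation.
    weight: (out_ch, in_ch, k, k), style: (in_ch,)."""
    w = weight * style[None, :, None, None]  # scale input channels per style
    # Demodulate so each output filter has (approximately) unit norm.
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + 1e-8)
    return w * demod[:, None, None, None]

def dual_style_weights(weight, shared_style, modality_style):
    """Hypothetical dual-style block: a style shared by all modalities
    (identity features such as pore layout) is combined with a
    per-modality style (e.g. albedo vs. normals) before modulation."""
    return modulate(weight, shared_style * modality_style)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4, 3, 3))   # one conv layer's weights
shared = rng.standard_normal(4)         # identity code, shared across maps
albedo_style = rng.standard_normal(4)   # modality code for the albedo map
normal_style = rng.standard_normal(4)   # modality code for the normal map

# Both modalities see the same shared identity code, which is one way to
# encourage consistent skin features across the generated maps.
w_albedo = dual_style_weights(w, shared, albedo_style)
w_normal = dual_style_weights(w, shared, normal_style)
print(w_albedo.shape, w_normal.shape)
```

Because the shared code enters both modulations, identity-dependent structure is coupled across modalities while each map keeps its own per-modality style, which matches the consistency goal the abstract states.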
dc.description.number: 2
dc.description.sectionheaders: Neural Texture and Image Synthesis
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 43
dc.identifier.doi: 10.1111/cgf.15039
dc.identifier.issn: 1467-8659
dc.identifier.pages: 14 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.15039
dc.identifier.uri: https://diglib.eg.org/handle/10.1111/cgf15039
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies -> Machine learning; Texturing
dc.subject: Computing methodologies
dc.subject: Machine learning
dc.subject: Texturing
dc.title: GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures
Files
Original bundle (showing 5 of 8)
- v43i2_49_15039.pdf (95.64 MB, Adobe Portable Document Format)
- avatarme_comparison.zip (27.01 MB, Zip file)
- eg_paper1093_gantlitz_website.zip (495.5 MB, Zip file)
- generated_samples.zip (175.72 MB, Zip file)
- modality_completion.zip (4.29 MB, Zip file)