Shape Transformers: Topology-Independent 3D Shape Models Using Transformers

dc.contributor.author: Chandran, Prashanth
dc.contributor.author: Zoss, Gaspard
dc.contributor.author: Gross, Markus
dc.contributor.author: Gotardo, Paulo
dc.contributor.author: Bradley, Derek
dc.contributor.editor: Chaine, Raphaëlle
dc.contributor.editor: Kim, Min H.
dc.date.accessioned: 2022-04-22T06:27:51Z
dc.date.available: 2022-04-22T06:27:51Z
dc.date.issued: 2022
dc.description.abstract: Parametric 3D shape models are heavily utilized in computer graphics and vision applications to provide priors on the observed variability of an object's geometry (e.g., for faces). Original models were linear and operated on the entire shape at once. They were later enhanced to provide localized control on different shape parts separately. In deep shape models, nonlinearity was introduced via a sequence of fully-connected layers and activation functions, and locality was introduced in recent models that use mesh convolution networks. As common limitations, these models often dictate, in one way or another, the allowed extent of spatial correlations and also require that a fixed mesh topology be specified ahead of time. To overcome these limitations, we present Shape Transformers, a new nonlinear parametric 3D shape model based on transformer architectures. A key benefit of this new model comes from using the transformer's self-attention mechanism to automatically learn nonlinear spatial correlations for a class of 3D shapes. This is in contrast to global models that correlate everything and local models that dictate the correlation extent. Our transformer 3D shape autoencoder is a better alternative to mesh convolution models, which require specially-crafted convolution and down/up-sampling operators that can be difficult to design. Our model is also topologically independent: it can be trained once and then evaluated on any mesh topology, unlike most previous methods. We demonstrate the application of our model to different datasets, including 3D faces, 3D hand shapes and full human bodies. Our experiments demonstrate the strong potential of our Shape Transformer model in several applications in computer graphics and vision.
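
The abstract describes an autoencoder that applies transformer self-attention across mesh vertices and can be evaluated on arbitrary mesh topologies. Below is a minimal PyTorch sketch of that general idea, assuming vertices are treated as tokens and positional encodings are computed from continuous template coordinates rather than vertex indices; the class name, hyperparameters, pooling, and decoding scheme are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch of a topology-independent transformer shape autoencoder (assumed design).
import torch
import torch.nn as nn

class ShapeTransformerSketch(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=4, latent_dim=64):
        super().__init__()
        # Per-vertex embedding computed from continuous 3D template coordinates;
        # because it does not depend on vertex indices or mesh connectivity,
        # the model is not tied to one fixed topology.
        self.pos_embed = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.in_proj = nn.Linear(3, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.to_latent = nn.Linear(d_model, latent_dim)
        self.from_latent = nn.Linear(latent_dim, d_model)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers)
        self.out_proj = nn.Linear(d_model, 3)

    def forward(self, verts, template):
        # verts, template: (B, N, 3); N may differ between meshes.
        pe = self.pos_embed(template)
        tokens = self.in_proj(verts) + pe
        h = self.encoder(tokens)                   # self-attention learns spatial correlations
        z = self.to_latent(h.mean(dim=1))          # global shape code
        q = self.from_latent(z)[:, None, :] + pe   # query any set of template points
        return self.out_proj(self.decoder(q))      # reconstructed vertex positions

# Usage: reconstruct a mesh with 5000 vertices (dummy data).
model = ShapeTransformerSketch()
verts = torch.randn(1, 5000, 3)
template = torch.randn(1, 5000, 3)
recon = model(verts, template)                     # (1, 5000, 3)

Because the decoder is queried with embeddings of template coordinates, the same trained model can in principle be evaluated on a different vertex sampling of the template, which is one plausible reading of the topology independence claimed in the abstract.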
dc.description.number: 2
dc.description.sectionheaders: Human Animation and Topology
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 41
dc.identifier.doi: 10.1111/cgf.14468
dc.identifier.issn: 1467-8659
dc.identifier.pages: 195-207 (13 pages)
dc.identifier.uri: https://doi.org/10.1111/cgf.14468
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14468
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies --> Shape modeling; Modeling methodologies
dc.title: Shape Transformers: Topology-Independent 3D Shape Models Using Transformers
Files (Original bundle, 3 items):
- v41i2pp195-207.pdf (19.41 MB, Adobe Portable Document Format)
- shape_transformers-final.mp4 (292.62 MB, unknown data format)
- shape_transformers-supplemental.pdf (9.97 MB, Adobe Portable Document Format)