USTNet: Unsupervised Shape-to-Shape Translation via Disentangled Representations

dc.contributor.author: Wang, Haoran
dc.contributor.author: Li, Jiaxin
dc.contributor.author: Telea, Alexandru
dc.contributor.author: Kosinka, Jiří
dc.contributor.author: Wu, Zizhao
dc.contributor.editor: Umetani, Nobuyuki
dc.contributor.editor: Wojtan, Chris
dc.contributor.editor: Vouga, Etienne
dc.date.accessioned: 2022-10-04T06:39:36Z
dc.date.available: 2022-10-04T06:39:36Z
dc.date.issued: 2022
dc.description.abstract: We propose USTNet, a novel deep learning approach designed for learning shape-to-shape translation from unpaired domains in an unsupervised manner. The core of our approach lies in disentangled representation learning that factors out the discriminative features of 3D shapes into content and style codes. Given input shapes from multiple domains, USTNet disentangles their representation into style codes that contain distinctive traits across domains and content codes that contain domain-invariant traits. By fusing the style code of the target shape with the content code of the source shape, our method synthesizes new shapes that resemble the target style while retaining the content features of the source. Based on the shared style space, our method facilitates shape interpolation by manipulating the style attributes from different domains. Furthermore, by extending the basic building blocks of our network from two-class to multi-class classification, we adapt USTNet to tackle multi-domain shape-to-shape translation. Experimental results show that our approach generates realistic and natural translated shapes and improves on 3DSNet in quantitative evaluation metrics. Code is available at https://Haoran226.github.io/USTNet.
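The abstract describes the core fusion idea: a content code carries domain-invariant geometry while a style code carries domain-specific traits, and a translated shape is decoded from the source content paired with the target style. Below is a minimal, hypothetical PyTorch sketch of that idea; the encoder/decoder architectures, dimensions, and names are illustrative assumptions, not the authors' USTNet implementation.

```python
# Hypothetical sketch of style/content disentanglement and fusion as described
# in the abstract. All module designs and sizes are assumptions for illustration.
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """Shared-MLP encoder mapping a point cloud (B, N, 3) to a global code."""
    def __init__(self, code_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, code_dim))

    def forward(self, pts):
        feat = self.mlp(pts)              # (B, N, code_dim) per-point features
        return feat.max(dim=1).values     # (B, code_dim) max-pooled global code


class Decoder(nn.Module):
    """Decodes a fused (content, style) code back into a point cloud."""
    def __init__(self, content_dim, style_dim, n_points=2048):
        super().__init__()
        self.n_points = n_points
        self.mlp = nn.Sequential(nn.Linear(content_dim + style_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 512), nn.ReLU(),
                                 nn.Linear(512, n_points * 3))

    def forward(self, content, style):
        fused = torch.cat([content, style], dim=-1)   # fuse the two codes
        return self.mlp(fused).view(-1, self.n_points, 3)


# Usage sketch: translate a source shape toward the target domain's style by
# pairing the source content code with the target style code.
content_enc = PointEncoder(code_dim=128)   # domain-invariant traits
style_enc = PointEncoder(code_dim=8)       # domain-specific traits
decoder = Decoder(content_dim=128, style_dim=8)

source = torch.rand(1, 2048, 3)            # shape from domain A
target = torch.rand(1, 2048, 3)            # shape from domain B

translated = decoder(content_enc(source), style_enc(target))
print(translated.shape)                    # torch.Size([1, 2048, 3])
```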
dc.description.number: 7
dc.description.sectionheaders: Point Cloud Generation
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 41
dc.identifier.doi: 10.1111/cgf.14664
dc.identifier.issn: 1467-8659
dc.identifier.pages: 141-152
dc.identifier.pages: 12 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.14664
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14664
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies --> Point-based models; Artificial intelligence
dc.subject: Computing methodologies
dc.subject: Point-based models
dc.subject: Artificial intelligence
dc.title: USTNet: Unsupervised Shape-to-Shape Translation via Disentangled Representations
Files
Original bundle
Name: v41i7pp141-152.pdf
Size: 12.48 MB
Format: Adobe Portable Document Format
Name: paper1157_supplemental_material.pdf
Size: 11.41 MB
Format: Adobe Portable Document Format