DeepGarment: 3D Garment Shape Estimation from a Single Image

dc.contributor.author: Danerek, Radek
dc.contributor.author: Dibra, Endri
dc.contributor.author: Öztireli, A. Cengiz
dc.contributor.author: Ziegler, Remo
dc.contributor.author: Gross, Markus
dc.contributor.editor: Loic Barthe and Bedrich Benes
dc.date.accessioned: 2017-04-22T16:26:37Z
dc.date.available: 2017-04-22T16:26:37Z
dc.date.issued: 2017
dc.description.abstract: 3D garment capture is an important component of various applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations involved, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem, and current solutions come with assumptions on the lighting, camera calibration, complexity of the human or mannequin poses considered, and, more importantly, a stable physical state for the garment and the underlying human body. In addition, most existing methods require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through motion-capture sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self-occlusions, various camera poses, and lighting conditions, at interactive rates. Further improvement is shown when more than one view is integrated. Additionally, we show applications of our method to videos.
dc.description.number: 2
dc.description.sectionheaders: Physics in Animation
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 36
dc.identifier.doi: 10.1111/cgf.13125
dc.identifier.issn: 1467-8659
dc.identifier.pages: 269-280
dc.identifier.uri: https://doi.org/10.1111/cgf.13125
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13125
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: I.3.5 [Computer Graphics]
dc.subject: Computational Geometry and Object Modeling
dc.subject: I.3.7 [Computer Graphics]
dc.subject: Three Dimensional Graphics and Realism
dc.title: DeepGarment: 3D Garment Shape Estimation from a Single Image
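The abstract describes training a CNN to regress per-vertex 3D displacements from a template garment mesh, so the recovered shape is template plus predicted displacement. The following is a minimal illustrative sketch of that output formulation only; the actual network architecture and the paper's specialized loss are not detailed in this record, so a plain per-vertex L2 loss is used as a stand-in:

```python
# Illustrative sketch of displacement-based garment reconstruction.
# NOTE: the reconstruction rule and the L2 loss below are assumptions
# for illustration, not the paper's actual architecture or loss.

def reconstruct(template, displacements):
    """Apply predicted per-vertex 3D displacements to the template mesh."""
    return [[t[i] + d[i] for i in range(3)]
            for t, d in zip(template, displacements)]

def vertex_l2_loss(predicted, target):
    """Mean squared per-vertex error between predicted and ground-truth
    vertex positions (a generic stand-in for the specialized loss)."""
    n = len(predicted)
    return sum((p[i] - g[i]) ** 2
               for p, g in zip(predicted, target)
               for i in range(3)) / n

# Toy example: a 2-vertex "mesh".
template = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
disp     = [[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]]  # network output (hypothetical)
target   = [[0.1, 0.0, 0.0], [1.0, 0.2, 0.0]]  # ground-truth vertices

pred = reconstruct(template, disp)
print(vertex_l2_loss(pred, target))  # prints 0.0
```

In practice a CNN would map the rendered garment image to the flattened displacement vector, and training would minimize such a loss over the synthetic image/mesh pairs described in the abstract.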