MOVIN: Real-time Motion Capture using a Single LiDAR

dc.contributor.author: Jang, Deok-Kyeong (en_US)
dc.contributor.author: Yang, Dongseok (en_US)
dc.contributor.author: Jang, Deok-Yun (en_US)
dc.contributor.author: Choi, Byeoli (en_US)
dc.contributor.author: Jin, Taeil (en_US)
dc.contributor.author: Lee, Sung-Hee (en_US)
dc.contributor.editor: Chaine, Raphaëlle (en_US)
dc.contributor.editor: Deng, Zhigang (en_US)
dc.contributor.editor: Kim, Min H. (en_US)
dc.date.accessioned: 2023-10-09T07:35:45Z
dc.date.available: 2023-10-09T07:35:45Z
dc.date.issued: 2023
dc.description.abstract: Recent advancements in technology have brought forth new forms of interactive applications, such as the social metaverse, where end users interact with each other through their virtual avatars. In such applications, precise full-body tracking is essential for an immersive experience and a sense of embodiment with the virtual avatar. However, current motion capture systems are not easily accessible to end users due to their high cost, the special skills required to operate them, or the discomfort associated with wearable devices. In this paper, we present MOVIN, a data-driven generative method for real-time motion capture with global tracking, using a single LiDAR sensor. Our autoregressive conditional variational autoencoder (CVAE) model learns the distribution of pose variations conditioned on the given 3D point cloud from LiDAR. As a central factor for high-accuracy motion capture, we propose a novel feature encoder that learns the correlation between historical 3D point cloud data and global and local pose features, resulting in effective learning of the pose prior. Global pose features include root translation, rotation, and foot contacts, while local features comprise joint positions and rotations. Subsequently, a pose generator takes the sampled latent variable along with the features from the previous frame to generate a plausible current pose. Our framework accurately predicts the performer's 3D global information and local joint details while maintaining temporally coherent movement across frames. We demonstrate the effectiveness of our architecture through quantitative and qualitative evaluations against state-of-the-art methods. Additionally, we implement a real-time application to showcase our method in real-world scenarios. The MOVIN dataset is available at https://movin3d.github.io/movin_pg2023/. (en_US)
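The abstract describes an autoregressive CVAE that encodes point-cloud and previous-pose features into a conditioning vector, samples a latent variable, and decodes the current pose. The PyTorch sketch below illustrates that conditioning-and-sampling loop only; the module names, feature dimensions, and layer sizes are illustrative assumptions and are not taken from the MOVIN implementation.

```python
# Minimal sketch of an autoregressive CVAE pose generator conditioned on
# LiDAR point-cloud features. All dimensions and layer sizes are assumed
# for illustration; they are not the MOVIN paper's actual architecture.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Fuses a (pre-extracted) point-cloud feature vector and the previous
    pose into a shared conditioning vector."""
    def __init__(self, pc_dim=1024, pose_dim=135, cond_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pc_dim + pose_dim, 512), nn.ELU(),
            nn.Linear(512, cond_dim), nn.ELU(),
        )

    def forward(self, pc_feat, prev_pose):
        return self.net(torch.cat([pc_feat, prev_pose], dim=-1))

class PoseCVAE(nn.Module):
    """Conditional VAE: the encoder infers q(z | current pose, condition)
    during training; the decoder generates a pose from (z, condition)."""
    def __init__(self, pose_dim=135, cond_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim + cond_dim, 256), nn.ELU(),
        )
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ELU(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, cur_pose, cond):
        h = self.encoder(torch.cat([cur_pose, cond], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample(self, cond):
        # At run time, sample z from the prior and decode the current pose.
        z = torch.randn(cond.shape[0], self.mu.out_features, device=cond.device)
        return self.decoder(torch.cat([z, cond], dim=-1))

# Autoregressive roll-out: each frame's generated pose becomes the next
# frame's "previous pose" input, which encourages temporal coherence.
feat_enc, cvae = FeatureEncoder(), PoseCVAE()
prev_pose = torch.zeros(1, 135)            # e.g. a rest pose at start-up
for pc_feat in torch.randn(10, 1, 1024):   # stand-in for per-frame LiDAR features
    cond = feat_enc(pc_feat, prev_pose)
    prev_pose = cvae.sample(cond)
```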
dc.description.number: 7
dc.description.sectionheaders: Motion Capture and Generation
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 42
dc.identifier.doi: 10.1111/cgf.14961
dc.identifier.issn: 1467-8659
dc.identifier.pages: 12 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.14961
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14961
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.subject: CCS Concepts: Computing methodologies -> Motion capture; Motion processing; Neural networks
dc.subject: Computing methodologies
dc.subject: Motion capture
dc.subject: Motion processing
dc.subject: Neural networks
dc.title: MOVIN: Real-time Motion Capture using a Single LiDAR (en_US)
Files
Original bundle (2 files)
v42i7_33_14961.pdf (20.16 MB, Adobe Portable Document Format)
paper1198_mm.mp4 (580.84 MB, Unknown data format)