NeuroDog: Quadruped Embodiment using Neural Networks

dc.contributor.author	Egan, Dónal	en_US
dc.contributor.author	Cosker, Darren	en_US
dc.contributor.author	McDonnell, Rachel	en_US
dc.contributor.editor	Wang, Huamin	en_US
dc.contributor.editor	Ye, Yuting	en_US
dc.contributor.editor	Zordan, Victor	en_US
dc.date.accessioned	2023-10-16T12:33:19Z
dc.date.available	2023-10-16T12:33:19Z
dc.date.issued	2023
dc.description.abstract	Virtual reality (VR) allows us to immerse ourselves in alternative worlds in which we can embody avatars to take on new identities. Usually, these avatars are humanoid or possess very strong anthropomorphic qualities. Allowing users of VR to embody non-humanoid virtual characters or animals presents additional challenges. Extreme morphological differences and the complexities of different characters’ motions make constructing a real-time mapping between input human motion and target character motion difficult. Previous animal embodiment work has focused on directly mapping human motion to the target animal via inverse kinematics, which can lead to the target animal moving in a way that is inappropriate or unnatural for the animal type. We present a novel real-time method, incorporating two neural networks, for mapping human motion to realistic quadruped motion. Crucially, the output quadruped motions are realistic while also being faithful to the input user motions. We incorporate our mapping into a VR embodiment system in which users can embody a virtual quadruped from a first-person perspective. Further, we evaluate our system via a perceptual experiment investigating the quality of the synthesised motion, the system’s response to user input, and the sense of embodiment experienced by users. The main findings of the study are that, compared to a baseline method in which the human-to-quadruped motion mapping relies solely on inverse kinematics, our system responds equally well to user input, produces higher-quality motion, and gives users a stronger sense of body ownership. Finally, our embodiment system relies solely on consumer-grade hardware, making it appropriate for use in applications such as VR gaming or VR social platforms.	en_US
dc.description.number	3
dc.description.sectionheaders	Character Synthesis
dc.description.seriesinformation	Proceedings of the ACM on Computer Graphics and Interactive Techniques
dc.description.volume	6
dc.identifier.doi	10.1145/3606936
dc.identifier.issn	2577-6193
dc.identifier.uri	https://doi.org/10.1145/3606936
dc.identifier.uri	https://diglib.eg.org:443/handle/10.1145/3606936
dc.publisher	ACM Association for Computing Machinery	en_US
dc.subject	CCS Concepts: Computing methodologies -> Motion capture; Additional Key Words and Phrases: VR embodiment, Quadruped embodiment, Motion synthesis, Deep Learning, Motion Capture, Perception
dc.subject	Computing methodologies
dc.subject	Motion capture
dc.subject	VR embodiment
dc.subject	Quadruped embodiment
dc.subject	Motion synthesis
dc.subject	Deep Learning
dc.subject	Motion Capture
dc.subject	Perception
dc.title	NeuroDog: Quadruped Embodiment using Neural Networks	en_US