Browsing by Author "Ye, Yuting"
Item
Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum
(The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Zhang, Yunbo; Clegg, Alexander; Ha, Sehoon; Turk, Greg; Ye, Yuting
Editors: Myszkowski, Karol; Niessner, Matthias

Abstract: In-hand object manipulation is challenging to simulate due to complex contact dynamics, non-repetitive finger gaits, and the need to indirectly control unactuated objects. Further adapting a successful manipulation skill to new objects with different shapes and physical properties is a similarly challenging problem. In this work, we show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high-quality motion capture example via deep reinforcement learning, with careful design of the imitation learning problem. We apply our approach to both single-handed and two-handed dexterous manipulation of diverse object shapes and motions. We then demonstrate further adaptation of the example motion to a more complex shape through curriculum learning on intermediate shapes morphed between the source and target objects. While a naive curriculum of progressive morphs often falls short, we propose a simple greedy curriculum search algorithm that successfully applies to a range of objects such as a teapot, bunny, bottle, train, and elephant (see the first sketch after these listings).

Item
Physics-based Motion Retargeting from Sparse Inputs
(Association for Computing Machinery (ACM), 2023)
Authors: Reda, Daniele; Won, Jungdam; Ye, Yuting; Panne, Michiel van de; Winkler, Alexander
Editors: Wang, Huamin; Ye, Yuting; Zordan, Victor

Abstract: Avatars are important for creating interactive and immersive experiences in virtual worlds. One challenge in animating these characters to mimic a user's motion is that commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data about the user's pose. Another challenge is that an avatar might have a different skeleton structure than a human, and the mapping between the two is unclear. In this work we address both of these challenges. We introduce a method to retarget motions in real time from sparse human sensor data to characters of various morphologies. Our method uses reinforcement learning to train a policy to control characters in a physics simulator. We require only human motion capture data for training, without relying on artist-generated animations for each avatar. This allows us to use large motion capture datasets to train general policies that can track unseen users from real and sparse data in real time. We demonstrate the feasibility of our approach on three characters with different skeleton structures: a dinosaur, a mouse-like creature, and a human. We show that the avatar poses often match the user surprisingly well, despite no sensor information about the lower body being available. We discuss and ablate the important components of our framework, specifically the kinematic retargeting step; the imitation, contact, and action rewards; and our asymmetric actor-critic observations (see the second sketch below). We further explore the robustness of our method in a variety of settings, including unbalancing, dancing, and sports motions.
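Sketch 1: the greedy shape curriculum. A minimal, hypothetical sketch of the greedy curriculum search described in the first item, assuming a morph(alpha) function that interpolates the object shape between source (alpha = 0) and target (alpha = 1), and a train(policy, shape) routine that fine-tunes the policy on a shape and reports whether imitation succeeded. All names and the success criterion are illustrative assumptions, not the paper's actual interface.

    def greedy_curriculum(policy, morph, train, n_steps=10):
        """Greedily advance toward the target shape: always attempt the most
        ambitious remaining morph first, backing off to nearer ones on failure."""
        alphas = [i / n_steps for i in range(1, n_steps + 1)]  # candidate morphs
        current = 0.0  # fraction of the source-to-target morph mastered so far
        while current < 1.0:
            remaining = [a for a in alphas if a > current]
            for alpha in sorted(remaining, reverse=True):  # furthest first (greedy)
                candidate, success = train(policy, morph(alpha))
                if success:
                    policy, current = candidate, alpha
                    break  # mastered a new intermediate shape; push further
            else:
                return None  # no remaining morph is learnable from here
        return policy  # the policy now handles the target shape (alpha = 1)

Unlike a naive curriculum that steps through every intermediate morph in order, a greedy search of this kind skips morphs that turn out to be unnecessary and gives up only when no remaining morph is trainable.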
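Sketch 2: asymmetric actor-critic observations and reward terms. A hedged illustration of the setup named in the second item: the actor sees only what a headset and two controllers provide, while the critic, used only during training in simulation, also receives privileged state. The attribute names, reward forms, and weights below are assumptions for illustration, not the paper's definitions.

    import numpy as np

    def actor_observation(headset_pose, controller_poses, joint_state):
        # Deployable input: three tracked 6-DoF devices plus proprioception
        # from the simulated character; no lower-body sensors are assumed.
        return np.concatenate([headset_pose, *controller_poses, joint_state])

    def critic_observation(actor_obs, reference_pose, contact_flags):
        # Privileged input, available only in simulation during training:
        # the full-body reference motion and ground-contact information.
        return np.concatenate([actor_obs, reference_pose, contact_flags])

    def reward(sim, reference, action, prev_action,
               w_imitate=0.6, w_contact=0.2, w_action=0.2):
        # Illustrative decomposition into the imitation, contact, and action
        # terms mentioned in the abstract; exact forms and weights are guesses.
        r_imitate = np.exp(-np.sum((sim.body_positions - reference.body_positions) ** 2))
        r_contact = float(np.all(sim.foot_contacts == reference.foot_contacts))
        r_action = np.exp(-np.sum((action - prev_action) ** 2))  # smooth actions
        return w_imitate * r_imitate + w_contact * r_contact + w_action * r_action

Because the critic is discarded after training, the deployed policy needs only the sparse actor observation, which is what makes real-time tracking from a headset and controllers feasible in this kind of setup.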