Eurographics/ACM SIGGRAPH Symposium on Computer Animation 2024 (CGF 43, Issue 8)

August 21 to August 23, 2024 | Montreal, Canada
For posters, see SCA 2024 - Posters.

Gesture and Gaze Animation
Learning to Play Guitar with Robotic Hands
Chaoyi Luo, Pengbin Tang, Yuqi Ma, and Dongjin Huang
LLAniMAtion: LLAMA Driven Gesture Animation
Jonathan Windle, Iain Matthews, and Sarah Taylor
Reactive Gaze during Locomotion in Natural Environments
Julia K. Melgaré, Damien Rohmer, Soraia R. Musse, and Marie-Paule Cani
Character Animation I: Synthesis and Capture
Diffusion-based Human Motion Style Transfer with Semantic Guidance
Lei Hu, Zihao Zhang, Yongjing Ye, Yiwen Xu, and Shihong Xia
Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior
Qingqing Zhao, Peizhuo Li, Wang Yifan, Olga Sorkine-Hornung, and Gordon Wetzstein
Long-term Motion In-betweening via Keyframe Prediction
Seokhyeon Hong, Haemin Kim, Kyungmin Cho, and Junyong Noh
ADAPT: AI-Driven Artefact Purging Technique for IMU Based Motion Capture
Paul Schreiner, Rasmus Netterstrøm, Hang Yin, Sune Darkner, and Kenny Erleben
Character Animation II: Control
Learning to Move Like Professional Counter-Strike Players
David Durst, Feng Xie, Vishnu Sarukkai, Brennan Shacklett, Iuri Frosio, Chen Tessler, Joohwan Kim, Carly Taylor, Gilbert Bernstein, Sanjiban Choudhury, Pat Hanrahan, and Kayvon Fatahalian
PartwiseMPC: Interactive Control of Contact-Guided Motions
Niloofar Khoshsiyar, Ruiyu Gou, Tianhong Zhou, Sheldon Andrews, and Michiel van de Panne
VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters
Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, and Moritz Bächer
Animation and Cinematography
SketchAnim: Real-time Sketch Animation Transfer from Videos
Gaurav Rai, Shreyas Gupta, and Ojaswa Sharma
Creating a 3D Mesh in A-pose from a Single Image for Character Rigging
Seunghwan Lee and C. Karen Liu
Garment Animation NeRF with Color Editing
Renke Wang, Meng Zhang, Jun Li, and Jian Yang
Generating Flight Summaries Conforming to Cinematographic Principles
Christophe Lino and Marie-Paule Cani
Physics I: Fluids, Shells, and Natural Phenomena
Multiphase Viscoelastic Non-Newtonian Fluid Simulation
Yalan Zhang, Shen Long, Yanrui Xu, Xiaokun Wang, Chao Yao, Jiri Kosinka, Steffen Frey, Alexandru Telea, and Xiaojuan Ban
Reconstruction of Implicit Surfaces from Fluid Particles Using Convolutional Neural Networks
Chen Zhao, Tamar Shinar, and Craig Schroeder
Unerosion: Simulating Terrain Evolution Back in Time
Zhanyu Yang, Guillaume Cordonnier, Marie-Paule Cani, Christian Perrenoud, and Bedrich Benes
Curved Three-Director Cosserat Shells with Strong Coupling
Fabian Löschner, José Antonio Fernández-Fernández, Stefan Rhys Jeske, and Jan Bender
Physics II: Cutting and Colliding
Generalized eXtended Finite Element Method for Deformable Cutting via Boolean Operations
Quoc-Minh Ton-That, Paul G. Kry, and Sheldon Andrews
Strongly Coupled Simulation of Magnetic Rigid Bodies
Lukas Westhofen, José Antonio Fernández-Fernández, Stefan Rhys Jeske, and Jan Bender
A Multi-layer Solver for XPBD
Alexandre Mercier-Aubin and Paul G. Kry
Robust and Artefact-Free Deformable Contact with Smooth Surface Representations
Yinwei Du, Yue Li, Stelian Coros, and Bernhard Thomaszewski

BibTeX (CGF 43, Issue 8)

@article{10.1111:cgf.15188,
  journal = {Computer Graphics Forum},
  title = {{Eurographics/ACM SIGGRAPH Symposium on Computer Animation 2024 - CGF 43-8: Frontmatter}},
  author = {Skouras, Melina and Wang, He},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15188}
}

@article{10.1111:cgf.15166,
  journal = {Computer Graphics Forum},
  title = {{Learning to Play Guitar with Robotic Hands}},
  author = {Luo, Chaoyi and Tang, Pengbin and Ma, Yuqi and Huang, Dongjin},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15166}
}

@article{10.1111:cgf.15167,
  journal = {Computer Graphics Forum},
  title = {{LLAniMAtion: LLAMA Driven Gesture Animation}},
  author = {Windle, Jonathan and Matthews, Iain and Taylor, Sarah},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15167}
}

@article{10.1111:cgf.15168,
  journal = {Computer Graphics Forum},
  title = {{Reactive Gaze during Locomotion in Natural Environments}},
  author = {Melgaré, Julia K. and Rohmer, Damien and Musse, Soraia R. and Cani, Marie-Paule},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15168}
}

@article{10.1111:cgf.15169,
  journal = {Computer Graphics Forum},
  title = {{Diffusion-based Human Motion Style Transfer with Semantic Guidance}},
  author = {Hu, Lei and Zhang, Zihao and Ye, Yongjing and Xu, Yiwen and Xia, Shihong},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15169}
}

@article{10.1111:cgf.15170,
  journal = {Computer Graphics Forum},
  title = {{Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior}},
  author = {Zhao, Qingqing and Li, Peizhuo and Yifan, Wang and Sorkine-Hornung, Olga and Wetzstein, Gordon},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15170}
}

@article{10.1111:cgf.15171,
  journal = {Computer Graphics Forum},
  title = {{Long-term Motion In-betweening via Keyframe Prediction}},
  author = {Hong, Seokhyeon and Kim, Haemin and Cho, Kyungmin and Noh, Junyong},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15171}
}

@article{10.1111:cgf.15172,
  journal = {Computer Graphics Forum},
  title = {{ADAPT: AI-Driven Artefact Purging Technique for IMU Based Motion Capture}},
  author = {Schreiner, Paul and Netterstrøm, Rasmus and Yin, Hang and Darkner, Sune and Erleben, Kenny},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15172}
}

@article{10.1111:cgf.15173,
  journal = {Computer Graphics Forum},
  title = {{Learning to Move Like Professional Counter-Strike Players}},
  author = {Durst, David and Xie, Feng and Sarukkai, Vishnu and Shacklett, Brennan and Frosio, Iuri and Tessler, Chen and Kim, Joohwan and Taylor, Carly and Bernstein, Gilbert and Choudhury, Sanjiban and Hanrahan, Pat and Fatahalian, Kayvon},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15173}
}

@article{10.1111:cgf.15174,
  journal = {Computer Graphics Forum},
  title = {{PartwiseMPC: Interactive Control of Contact-Guided Motions}},
  author = {Khoshsiyar, Niloofar and Gou, Ruiyu and Zhou, Tianhong and Andrews, Sheldon and Panne, Michiel van de},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15174}
}

@article{10.1111:cgf.15175,
  journal = {Computer Graphics Forum},
  title = {{VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters}},
  author = {Serifi, Agon and Grandia, Ruben and Knoop, Espen and Gross, Markus and Bächer, Moritz},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15175}
}

@article{10.1111:cgf.15176,
  journal = {Computer Graphics Forum},
  title = {{SketchAnim: Real-time Sketch Animation Transfer from Videos}},
  author = {Rai, Gaurav and Gupta, Shreyas and Sharma, Ojaswa},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15176}
}

@article{10.1111:cgf.15177,
  journal = {Computer Graphics Forum},
  title = {{Creating a 3D Mesh in A-pose from a Single Image for Character Rigging}},
  author = {Lee, Seunghwan and Liu, C. Karen},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15177}
}

@article{10.1111:cgf.15178,
  journal = {Computer Graphics Forum},
  title = {{Garment Animation NeRF with Color Editing}},
  author = {Wang, Renke and Zhang, Meng and Li, Jun and Yang, Jian},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15178}
}

@article{10.1111:cgf.15179,
  journal = {Computer Graphics Forum},
  title = {{Generating Flight Summaries Conforming to Cinematographic Principles}},
  author = {Lino, Christophe and Cani, Marie-Paule},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15179}
}

@article{10.1111:cgf.15180,
  journal = {Computer Graphics Forum},
  title = {{Multiphase Viscoelastic Non-Newtonian Fluid Simulation}},
  author = {Zhang, Yalan and Long, Shen and Xu, Yanrui and Wang, Xiaokun and Yao, Chao and Kosinka, Jiri and Frey, Steffen and Telea, Alexandru and Ban, Xiaojuan},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15180}
}

@article{10.1111:cgf.15181,
  journal = {Computer Graphics Forum},
  title = {{Reconstruction of Implicit Surfaces from Fluid Particles Using Convolutional Neural Networks}},
  author = {Zhao, Chen and Shinar, Tamar and Schroeder, Craig},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15181}
}

@article{10.1111:cgf.15182,
  journal = {Computer Graphics Forum},
  title = {{Unerosion: Simulating Terrain Evolution Back in Time}},
  author = {Yang, Zhanyu and Cordonnier, Guillaume and Cani, Marie-Paule and Perrenoud, Christian and Benes, Bedrich},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15182}
}

@article{10.1111:cgf.15183,
  journal = {Computer Graphics Forum},
  title = {{Curved Three-Director Cosserat Shells with Strong Coupling}},
  author = {Löschner, Fabian and Fernández-Fernández, José Antonio and Jeske, Stefan Rhys and Bender, Jan},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15183}
}

@article{10.1111:cgf.15184,
  journal = {Computer Graphics Forum},
  title = {{Generalized eXtended Finite Element Method for Deformable Cutting via Boolean Operations}},
  author = {Ton-That, Quoc-Minh and Kry, Paul G. and Andrews, Sheldon},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15184}
}

@article{10.1111:cgf.15185,
  journal = {Computer Graphics Forum},
  title = {{Strongly Coupled Simulation of Magnetic Rigid Bodies}},
  author = {Westhofen, Lukas and Fernández-Fernández, José Antonio and Jeske, Stefan Rhys and Bender, Jan},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15185}
}

@article{10.1111:cgf.15186,
  journal = {Computer Graphics Forum},
  title = {{A Multi-layer Solver for XPBD}},
  author = {Mercier-Aubin, Alexandre and Kry, Paul G.},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15186}
}

@article{10.1111:cgf.15187,
  journal = {Computer Graphics Forum},
  title = {{Robust and Artefact-Free Deformable Contact with Smooth Surface Representations}},
  author = {Du, Yinwei and Li, Yue and Coros, Stelian and Thomaszewski, Bernhard},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15187}
}

Recent Submissions
  • Eurographics/ACM SIGGRAPH Symposium on Computer Animation 2024 - CGF 43-8: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Skouras, Melina; Wang, He
  • Learning to Play Guitar with Robotic Hands
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Luo, Chaoyi; Tang, Pengbin; Ma, Yuqi; Huang, Dongjin
Playing the guitar is a dexterous human skill that poses significant challenges in computer graphics and robotics due to the precision required in finger positioning and coordination between hands. Current methods often rely on motion capture data to replicate specific guitar playing segments, which restricts the range of performances and demands intricate post-processing. In this paper, we introduce a novel reinforcement learning model that can play the guitar using robotic hands, without the need for motion capture datasets, from input tablatures. To achieve this, we divide the guitar-playing simulation task into three stages: (a) for an input tablature, we first generate corresponding fingerings that align with human habits; (b) using the generated fingerings as guidance, we train a neural network for controlling the fingers of the left hand via deep reinforcement learning; and (c) we generate plucking movements for the right hand based on inverse kinematics according to the tablature. We evaluate our method by employing precision, recall, and F1 scores as quantitative metrics to thoroughly assess its performance in playing musical notes. In addition, we conduct qualitative analysis through user studies to evaluate the visual and auditory effects of guitar performance. The results demonstrate that our model excels in playing most moderately difficult and easier musical pieces, accurately playing nearly all notes.
  • LLAniMAtion: LLAMA Driven Gesture Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Windle, Jonathan; Matthews, Iain; Taylor, Sarah
    Co-speech gesturing is an important modality in conversation, providing context and social cues. In character animation, appropriate and synchronised gestures add realism, and can make interactive agents more engaging. Historically, methods for automatically generating gestures were predominantly audio-driven, exploiting the prosodic and speech-related content that is encoded in the audio signal. In this paper we instead experiment with using Large-Language Model (LLM) features for gesture generation that are extracted from text using LLAMA2. We compare against audio features, and explore combining the two modalities in both objective tests and a user study. Surprisingly, our results show that LLAMA2 features on their own perform significantly better than audio features and that including both modalities yields no significant difference to using LLAMA2 features in isolation. We demonstrate that the LLAMA2 based model can generate both beat and semantic gestures without any audio input, suggesting LLMs can provide rich encodings that are well suited for gesture generation.
  • Reactive Gaze during Locomotion in Natural Environments
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Melgaré, Julia K.; Rohmer, Damien; Musse, Soraia R.; Cani, Marie-Paule
    Animating gaze behavior is crucial for creating believable virtual characters, providing insights into their perception and interaction with the environment. In this paper, we present an efficient yet natural-looking gaze animation model applicable to real-time walking characters exploring natural environments. We address the challenge of dynamic gaze adaptation by combining findings from neuroscience with a data-driven saliency model. Specifically, our model determines gaze focus by considering the character's locomotion, environment stimuli, and terrain conditions. Our model is compatible with both automatic navigation through pre-defined character trajectories and user-guided interactive locomotion, and can be configured according to the desired degree of visual exploration of the environment. Our perceptual evaluation shows that our solution significantly improves the state-of-the-art saliency-based gaze animation with respect to the character's apparent awareness of the environment, the naturalness of the motion, and the elements to which it pays attention.
  • Diffusion-based Human Motion Style Transfer with Semantic Guidance
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Hu, Lei; Zhang, Zihao; Ye, Yongjing; Xu, Yiwen; Xia, Shihong
3D human motion style transfer is a fundamental problem in computer graphics and animation. Existing AdaIN-based methods necessitate datasets with balanced style distribution and content/style labels to train the clustered latent space. However, we may encounter a single unseen style example in practical scenarios, but not in sufficient quantity to constitute a style cluster for AdaIN-based methods. Therefore, in this paper, we propose a novel two-stage framework for few-shot style transfer learning based on the diffusion model. Specifically, in the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior so that it can cope with various content motion inputs. In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer. The key idea is regarding the reverse process of diffusion as a motion-style translation process since the motion styles can be viewed as special motion variations. During the fine-tuning for style transfer, a simple yet effective semantic-guided style transfer loss coordinated with style example reconstruction loss is introduced to supervise the style transfer in CLIP semantic space. The qualitative and quantitative evaluations demonstrate that our method can achieve state-of-the-art performance and has practical applications. The source code is available at https://github.com/hlcdyy/diffusion-based-motion-style-transfer.
  • Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhao, Qingqing; Li, Peizhuo; Yifan, Wang; Sorkine-Hornung, Olga; Wetzstein, Gordon
Creating plausible motions for a diverse range of characters is a long-standing goal in computer graphics. Current learning-based motion synthesis methods rely on large-scale motion datasets, which are often difficult if not impossible to acquire. On the other hand, pose data is more accessible, since static posed characters are easier to create and can even be extracted from images using recent advancements in computer vision. In this paper, we tap into this alternative data source and introduce a neural motion synthesis approach through retargeting, which generates plausible motion for various characters that have only pose data by transferring motion from a single existing motion capture dataset of another, drastically different character. Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and performs robustly with small or noisy pose data sets, ranging from a few artist-created poses to noisy poses estimated directly from images. Additionally, a user study indicated that a majority of participants found our retargeted motion to be more enjoyable to watch, more lifelike in appearance, and exhibiting fewer artifacts. Our code and dataset can be accessed here.
  • Long-term Motion In-betweening via Keyframe Prediction
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Hong, Seokhyeon; Kim, Haemin; Cho, Kyungmin; Noh, Junyong
Motion in-betweening has emerged as a promising approach to enhance the efficiency of motion creation due to its flexibility and time performance. However, previous in-betweening methods are limited to generating short transitions due to growing pose ambiguity when the number of missing frames increases. This length-related constraint makes optimization hard, and it further imposes a constraint on the target pose, limiting the degrees of freedom for artists to use. In this paper, we introduce a keyframe-driven approach that effectively solves the pose ambiguity problem, allowing robust in-betweening performance on various lengths of missing frames. To incorporate keyframe-driven motion synthesis, we introduce a keyframe score that measures the likelihood of a frame being used as a keyframe as well as an adaptive keyframe selection method that maintains appropriate temporal distances between resulting keyframes. Additionally, we employ phase manifolds to further resolve the pose ambiguity and incorporate trajectory conditions to guide the approximate movement of the character. Comprehensive evaluations, encompassing both quantitative and qualitative analyses, were conducted to compare our method with state-of-the-art in-betweening approaches across various transition lengths. The code for the paper is available at https://github.com/seokhyeonhong/long-mib.
  • ADAPT: AI-Driven Artefact Purging Technique for IMU Based Motion Capture
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Schreiner, Paul; Netterstrøm, Rasmus; Yin, Hang; Darkner, Sune; Erleben, Kenny
While IMU based motion capture offers a cost-effective alternative to premium camera-based systems, it often falls short in matching the latter's realism. Common distortions, such as self-penetrating body parts, foot skating, and floating, limit the usability of these systems, particularly for high-end users. To address this, we employed reinforcement learning to train an AI agent that mimics erroneous sample motion. Because our agent operates within a simulated environment, it must adhere to the laws of physics and thus inherently avoids generating these distortions. Impressively, the agent manages to mimic the sample motions while preserving their distinctive characteristics. We assessed our method's efficacy across various types of input data, showcasing an ideal blend of artefact-laden IMU-based data with high-grade optical motion capture data. Furthermore, we compared the configuration of observation and action spaces with other implementations, pinpointing the most suitable configuration for our purposes. All our models underwent rigorous evaluation using a spectrum of quantitative metrics complemented by a qualitative review. These evaluations were performed using a benchmark dataset of IMU-based motion data from actors not included in the training data.
  • Learning to Move Like Professional Counter-Strike Players
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Durst, David; Xie, Feng; Sarukkai, Vishnu; Shacklett, Brennan; Frosio, Iuri; Tessler, Chen; Kim, Joohwan; Taylor, Carly; Bernstein, Gilbert; Choudhury, Sanjiban; Hanrahan, Pat; Fatahalian, Kayvon
    In multiplayer, first-person shooter games like Counter-Strike: Global Offensive (CS:GO), coordinated movement is a critical component of high-level strategic play. However, the complexity of team coordination and the variety of conditions present in popular game maps make it impractical to author hand-crafted movement policies for every scenario. We show that it is possible to take a data-driven approach to creating human-like movement controllers for CS:GO. We curate a team movement dataset comprising 123 hours of professional game play traces, and use this dataset to train a transformer-based movement model that generates human-like team movement for all players in a ''Retakes'' round of the game. Importantly, the movement prediction model is efficient. Performing inference for all players takes less than 0.5 ms per game step (amortized cost) on a single CPU core, making it plausible for use in commercial games today. Human evaluators assess that our model behaves more like humans than both commercially-available bots and procedural movement controllers scripted by experts (16% to 59% higher by TrueSkill rating of ''human-like''). Using experiments involving in-game bot vs. bot self-play, we demonstrate that our model performs simple forms of teamwork, makes fewer common movement mistakes, and yields movement distributions, player lifetimes, and kill locations similar to those observed in professional CS:GO match play.
  • PartwiseMPC: Interactive Control of Contact-Guided Motions
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Khoshsiyar, Niloofar; Gou, Ruiyu; Zhou, Tianhong; Andrews, Sheldon; Panne, Michiel van de
Physics-based character motions remain difficult to create and control. We make two contributions towards simpler specification and faster generation of physics-based control. First, we introduce a novel partwise model predictive control (MPC) method that exploits independent planning for body parts when this proves beneficial, while defaulting to whole-body motion planning when that proves to be more effective. Second, we introduce a new approach to motion specification, based on specifying an ordered set of contact keyframes. These each specify a small number of pairwise contacts between the body and the environment, and serve as loose specifications of motion strategies. Unlike regular keyframes or traditional trajectory optimization constraints, they are heavily under-constrained and have flexible timing. We demonstrate a range of challenging contact-rich motions that can be generated online at interactive rates using this framework. We further show the generalization capabilities of the method.
  • VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Serifi, Agon; Grandia, Ruben; Knoop, Espen; Gross, Markus; Bächer, Moritz
    Recent progress in physics-based character control has made it possible to learn policies from unstructured motion data. However, it remains challenging to train a single control policy that works with diverse and unseen motions, and can be deployed to real-world physical robots. In this paper, we propose a two-stage technique that enables the control of a character with a full-body kinematic motion reference, with a focus on imitation accuracy. In a first stage, we extract a latent space encoding by training a variational autoencoder, taking short windows of motion from unstructured data as input. We then use the embedding from the time-varying latent code to train a conditional policy in a second stage, providing a mapping from kinematic input to dynamics-aware output. By keeping the two stages separate, we benefit from self-supervised methods to get better latent codes and explicit imitation rewards to avoid mode collapse. We demonstrate the efficiency and robustness of our method in simulation, with unseen user-specified motions, and on a bipedal robot, where we bring dynamic motions to the real world.
  • SketchAnim: Real-time Sketch Animation Transfer from Videos
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Rai, Gaurav; Gupta, Shreyas; Sharma, Ojaswa
Animating hand-drawn sketches is an appealing art form that gives the animator expressive freedom but requires significant expertise. In this work, we introduce a novel sketch animation framework designed to address inherent challenges, such as motion extraction, motion transfer, and occlusion. The framework takes an exemplar video input featuring a moving object and utilizes a robust motion transfer technique to animate the input sketch. We show comparative evaluations that demonstrate the superior performance of our method over existing sketch animation techniques. Notably, our approach exhibits a higher level of user accessibility in contrast to conventional sketch-based animation systems, positioning it as a promising contributor to the field of sketch animation. Project page: https://graphics-research-group.github.io/SketchAnim/
  • Creating a 3D Mesh in A-pose from a Single Image for Character Rigging
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Lee, Seunghwan; Liu, C. Karen
Learning-based methods for 3D content generation have shown great potential to create 3D characters from text prompts, videos, and images. However, current methods primarily focus on generating static 3D meshes, overlooking the crucial aspect of creating animatable 3D meshes. Directly using 3D meshes generated by existing methods to create underlying skeletons for animation presents many challenges because the generated mesh might exhibit geometry artifacts or assume arbitrary poses that complicate the subsequent rigging process. This work proposes a new framework for generating a 3D animatable mesh from a single 2D image depicting the character. We do so by enforcing the generated 3D mesh to assume an A-pose, which can mitigate the geometry artifacts and facilitate the use of existing automatic rigging methods. Our approach aims to leverage the generative power of existing models across modalities without the need for new data or large-scale training. We evaluate the effectiveness of our framework with qualitative results, as well as ablation studies and quantitative comparisons with existing 3D mesh generation models.
  • Garment Animation NeRF with Color Editing
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Wang, Renke; Zhang, Meng; Li, Jun; Yang, Jian
    Generating high-fidelity garment animations through traditional workflows, from modeling to rendering, is both tedious and expensive. These workflows often require repetitive steps in response to updates in character motion, rendering viewpoint changes, or appearance edits. Although recent neural rendering offers an efficient solution for computationally intensive processes, it struggles with rendering complex garment animations containing fine wrinkle details and realistic garment-and-body occlusions, while maintaining structural consistency across frames and dense view rendering. In this paper, we propose a novel approach to directly synthesize garment animations from body motion sequences without the need for an explicit garment proxy. Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure. Simultaneously, we capture detailed features from synthesized reference images of the garment's front and back, generated by a pre-trained image model. These features are then used to construct a neural radiance field that renders the garment animation video. Additionally, our technique enables garment recoloring by decomposing its visual elements. We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency. Furthermore, we showcase its applicability to color editing on both real and synthetic garment data. Compared to existing neural rendering techniques, our method exhibits qualitative and quantitative improvements in garment dynamics and wrinkle detail modeling. Code is available at https://github.com/wrk226/GarmentAnimationNeRF.
  • Item
    Generating Flight Summaries Conforming to Cinematographic Principles
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Lino, Christophe; Cani, Marie-Paule; Skouras, Melina; Wang, He
    We propose an automatic method for generating flight summaries of prescribed duration, given any planned 3D trajectory of a flying object. The challenge is to select relevant temporal ellipses, while keeping and adequately framing the most interesting parts of the trajectory, and enforcing cinematographic rules between the selected shots. Our solution optimizes the visual quality of the output video in terms of both camera view and film editing choices, thanks to a new optimization technique designed to jointly optimize the selection of the interesting parts of a flight and the camera animation parameters over time. To the best of our knowledge, this solution is the first to address camera control, film editing, and trajectory summarizing at once. Ablation studies demonstrate the visual quality of the flight summaries we generate compared to alternative methods.
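    The abstract above describes jointly optimizing which parts of a flight to keep under a duration budget. A heavily simplified sketch of just the selection subproblem (not the paper's joint camera/editing optimizer) is a knapsack-style choice of trajectory segments maximizing an interest score; the segment durations and scores below are hypothetical inputs.

```python
# Toy illustration: choose trajectory segments to keep in a summary of
# bounded duration, maximizing total "interest". This is only the
# selection subproblem; the paper also optimizes camera parameters.

def select_segments(durations, interest, budget):
    """0/1 knapsack over trajectory segments; returns kept indices."""
    # best[j] = (score, chosen indices) achievable within duration j
    best = [(0.0, [])] * (budget + 1)
    for i in range(len(durations)):
        d, s = durations[i], interest[i]
        for j in range(budget, d - 1, -1):  # iterate downward: 0/1 choice
            cand = best[j - d][0] + s
            if cand > best[j][0]:
                best[j] = (cand, best[j - d][1] + [i])
    return best[budget][1]

# Four segments, 9 time units of budget: keeps the two highest-value ones.
segments = select_segments([3, 5, 4, 2], [2.0, 6.0, 5.0, 1.5], budget=9)
```

A real summarizer would score segments by visual interest along the trajectory and interleave this with shot framing, but the discrete keep/drop structure is the same.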
  • Item
    Multiphase Viscoelastic Non-Newtonian Fluid Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhang, Yalan; Long, Shen; Xu, Yanrui; Wang, Xiaokun; Yao, Chao; Kosinka, Jiri; Frey, Steffen; Telea, Alexandru; Ban, Xiaojuan; Skouras, Melina; Wang, He
    We propose an SPH-based method for simulating viscoelastic non-Newtonian fluids within a multiphase framework. For this, we use mixture models to handle component transport and conformation tensor methods to handle the fluid's viscoelastic stresses. In addition, we consider a bonding effects network to handle the impact of microscopic chemical bonds on phase transport. Our method supports the simulation of both steady-state viscoelastic fluids and discontinuous shear behavior. Compared to previous work on single-phase viscous non-Newtonian fluids, our method can capture more complex behavior, including material mixing processes that generate non-Newtonian fluids. We adopt a uniform set of variables to describe shear thinning, shear thickening, and ordinary Newtonian fluids while automatically calculating local rheology in inhomogeneous solutions. In addition, our method can simulate large viscosity ranges under explicit integration schemes, which typically requires implicit viscosity solvers under earlier single-phase frameworks.
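    For readers unfamiliar with non-Newtonian rheology, the shear-thinning and shear-thickening behavior mentioned above is commonly modeled by making viscosity a function of the local shear rate. A minimal sketch using the standard power-law (Ostwald-de Waele) model follows; the paper's actual model uses conformation tensors and multiphase mixing, which this does not capture.

```python
def power_law_viscosity(shear_rate, k, n, mu_min=1e-4, mu_max=1e4):
    """Apparent viscosity mu = k * gamma^(n - 1) for shear rate gamma.
    n < 1: shear thinning; n > 1: shear thickening; n == 1: Newtonian.
    Clamping keeps extreme shear rates from destabilizing explicit
    integration (the large-viscosity regime the abstract mentions)."""
    mu = k * shear_rate ** (n - 1.0)
    return max(mu_min, min(mu, mu_max))

# A Newtonian fluid (n = 1) recovers the constant viscosity k;
# a shear-thinning fluid's viscosity drops as the shear rate grows.
mu_newtonian = power_law_viscosity(10.0, k=0.5, n=1.0)
mu_thin_slow = power_law_viscosity(1.0, k=0.5, n=0.5)
mu_thin_fast = power_law_viscosity(100.0, k=0.5, n=0.5)
```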
  • Item
    Reconstruction of Implicit Surfaces from Fluid Particles Using Convolutional Neural Networks
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhao, Chen; Shinar, Tamar; Schroeder, Craig; Skouras, Melina; Wang, He
    In this paper, we present a novel network-based approach for reconstructing signed distance functions from fluid particles. The method uses a weighting kernel to transfer particles to a regular grid, which forms the input to a convolutional neural network. We propose a regression-based regularization to reduce surface noise without penalizing high-curvature features. The reconstruction exhibits improved spatial surface smoothness and temporal coherence compared with existing state-of-the-art surface reconstruction methods. The method is insensitive to particle sampling density and robustly handles thin features, isolated particles, and sharp edges.
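    The particle-to-grid transfer step described above can be illustrated with a minimal 2D splat using a hat (bilinear) kernel; this is a generic stand-in, and the paper's specific weighting kernel may differ.

```python
import numpy as np

def splat_particles(particles, grid_shape, h):
    """Transfer unit particle mass to a regular 2D grid with a bilinear
    hat kernel; 'h' is the grid spacing. The resulting grid is the kind
    of regular input a convolutional network can consume."""
    grid = np.zeros(grid_shape)
    for px, py in particles:
        gx, gy = px / h, py / h
        i0, j0 = int(np.floor(gx)), int(np.floor(gy))
        fx, fy = gx - i0, gy - j0
        # distribute to the four surrounding cells, weights sum to 1
        for di, wx in ((0, 1 - fx), (1, fx)):
            for dj, wy in ((0, 1 - fy), (1, fy)):
                i, j = i0 + di, j0 + dj
                if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
                    grid[i, j] += wx * wy
    return grid

# One particle at a cell center spreads 0.25 mass to each neighbor.
grid = splat_particles([(0.5, 0.5)], (4, 4), h=1.0)
```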
  • Item
    Unerosion: Simulating Terrain Evolution Back in Time
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Yang, Zhanyu; Cordonnier, Guillaume; Cani, Marie-Paule; Perrenoud, Christian; Benes, Bedrich; Skouras, Melina; Wang, He
    While the past of a terrain cannot be known precisely, because an effect can result from many different causes, exploring these possible pasts opens the way to numerous applications ranging from movies and games to paleogeography. We introduce unerosion, an attempt to recover plausible past topographies from an input terrain represented as a height field. Our solution relies on novel algorithms for the backward simulation of different processes: fluvial erosion, sedimentation, and thermal erosion. This is achieved by re-formulating the equations of erosion and sedimentation so that they can be simulated back in time. These algorithms can be combined to account for a succession of climate changes backward in time, while the possible ambiguities provide editing options to the user. Results show that our solution can approximately reverse different types of erosion while enabling users to explore a variety of alternative pasts. Using a chronology of climatic periods to inform us about the main erosion phenomena, we also went back in time using real measured terrain data. We checked the consistency with geological findings, namely the height of river beds hundreds of thousands of years ago.
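    To make the idea of running erosion backward concrete, here is a deliberately naive 1D thermal-erosion step whose material flux can be negated; this is an illustrative sketch under simplified assumptions, not the paper's reformulated equations, and the flipped step is not an exact inverse (the backward problem is ambiguous, which is exactly why the paper treats it carefully).

```python
def thermal_erosion_step(h, talus, rate, backward=False):
    """One 1D thermal-erosion step on height samples h.
    Material moves toward the lower neighbor wherever the local slope
    exceeds the talus angle; backward=True flips the flux sign to step
    the terrain back in time (a naive sketch of the idea)."""
    sign = -1.0 if backward else 1.0
    out = list(h)
    for i in range(len(h) - 1):
        slope = h[i] - h[i + 1]
        if abs(slope) > talus:
            flux = sign * rate * (abs(slope) - talus)
            if slope > 0:   # material flows from i to i + 1
                out[i] -= flux
                out[i + 1] += flux
            else:           # material flows from i + 1 to i
                out[i] += flux
                out[i + 1] -= flux
    return out

h0 = [2.0, 0.0, 0.0]
h1 = thermal_erosion_step(h0, talus=0.5, rate=0.25)                  # erode
h_back = thermal_erosion_step(h1, talus=0.5, rate=0.25, backward=True)
```

Note that total material is conserved in both directions, but the backward step re-steepens slopes rather than exactly undoing the forward one; regularizing such ambiguity is the core difficulty the abstract alludes to.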
  • Item
    Curved Three-Director Cosserat Shells with Strong Coupling
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Löschner, Fabian; Fernández-Fernández, José Antonio; Jeske, Stefan Rhys; Bender, Jan; Skouras, Melina; Wang, He
    Continuum-based shell models are an established approach for the simulation of thin deformables in computer graphics. However, existing research in physically-based animation is mostly focused on shear-rigid Kirchhoff-Love shells. In this work we explore three-director Cosserat (micropolar) shells which introduce additional rotational degrees of freedom. This microrotation field models transverse shearing and in-plane drilling rotations. We propose an incremental potential formulation of the Cosserat shell dynamics which allows for strong coupling with frictional contact and other physical systems. We evaluate a corresponding finite element discretization for non-planar shells using second-order elements which alleviates shear-locking and permits simulation of curved geometries. Our formulation and the discretization, in particular of the rotational degrees of freedom, are designed to integrate well with typical simulation approaches in physically-based animation. While the discretization of the rotations requires some care, we demonstrate that they do not pose significant numerical challenges in Newton's method. In our experiments we also show that the codimensional shell model is consistent with the respective three-dimensional model. We qualitatively compare our formulation with Kirchhoff-Love shells and demonstrate intriguing use cases for the additional modes of control over dynamic deformations offered by the Cosserat model such as directly prescribing rotations or angular velocities and influencing the shell's curvature.
  • Item
    Generalized eXtended Finite Element Method for Deformable Cutting via Boolean Operations
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Ton-That, Quoc-Minh; Kry, Paul G.; Andrews, Sheldon; Skouras, Melina; Wang, He
    Traditional mesh-based methods for cutting deformable bodies rely on modifying the simulation mesh by deleting, duplicating, deforming or subdividing its elements. Unfortunately, such topological changes eventually lead to instability, reduced accuracy, or computational efficiency challenges. Hence, state-of-the-art algorithms favor the extended finite element method (XFEM), which decouples the cut geometry from the simulation mesh, allowing for stable and accurate cuts at an additional computational cost that is local to the cut region. However, in the 3-dimensional setting, current XFEM frameworks are limited by the cutting configurations that they support. In particular, intersecting cuts are either prohibited or require sophisticated special treatment. Our work presents a general XFEM formulation that is applicable to the 1-, 2-, and 3-dimensional setting without sacrificing the desirable properties of the method. In particular, we propose a generalized enrichment which supports multiple intersecting cuts of various degrees of non-linearity by leveraging recent advances in robust mesh-Boolean technology. This novel strategy additionally enables analytic discontinuous integration schemes required to compute mass, force and elastic energy. We highlight the simplicity, expressivity and accuracy of our XFEM implementation across various scenarios in which intersecting cutting patterns are featured.
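    The enrichment idea at the heart of XFEM can be shown in one dimension: a cut element represents a displacement discontinuity by augmenting its standard shape functions with a (shifted) Heaviside term. This is the textbook single-cut construction, not the paper's generalized multi-cut formulation.

```python
def xfem_displacement(x, xc, u_std, u_enr):
    """Displacement on a 1D two-node element [0, 1] cut at xc, using
    linear shape functions plus a shifted Heaviside enrichment.
    u_std: standard nodal displacements (u0, u1);
    u_enr: enrichment degrees of freedom (a0, a1)."""
    N = (1.0 - x, x)                        # linear shape functions
    H = lambda s: 1.0 if s >= xc else 0.0   # Heaviside jump at the cut
    u = 0.0
    for i, xi in enumerate((0.0, 1.0)):
        # shifting by H(xi) makes the enrichment vanish at the nodes,
        # so standard dofs keep their interpolatory meaning
        u += N[i] * (u_std[i] + (H(x) - H(xi)) * u_enr[i])
    return u

# With zero standard dofs and unit enrichment dofs, the displacement
# jumps across the cut at xc = 0.5 while the mesh itself is untouched:
left = xfem_displacement(0.49, 0.5, (0.0, 0.0), (1.0, 1.0))
right = xfem_displacement(0.51, 0.5, (0.0, 0.0), (1.0, 1.0))
```

Supporting several intersecting cuts, as the paper does, amounts to enriching with several such discontinuous functions and integrating each resulting sub-region separately, which is where the mesh-Boolean machinery comes in.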
  • Item
    Strongly Coupled Simulation of Magnetic Rigid Bodies
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Westhofen, Lukas; Fernández-Fernández, José Antonio; Jeske, Stefan Rhys; Bender, Jan; Skouras, Melina; Wang, He
    We present a strongly coupled method for the robust simulation of linear magnetic rigid bodies. Our approach describes the magnetic effects as part of an incremental potential function. This potential is inserted into the reformulation of the equations of motion for rigid bodies as an optimization problem. For handling collision and friction, we lean on the Incremental Potential Contact (IPC) method. Furthermore, we provide a novel, hybrid explicit/implicit time integration scheme for the magnetic potential based on a distance criterion. This reduces the fill-in of the energy Hessian in cases where the change in magnetic potential energy is small, leading to a simulation speedup without compromising the stability of the system. The resulting system yields a strongly coupled method for the robust simulation of magnetic effects. We showcase the robustness in theory by analyzing the behavior of the magnetic attraction against the contact resolution. Furthermore, we demonstrate stability in practice by simulating exceedingly strong and arbitrarily shaped magnets. The results are free of artifacts like bouncing for time step sizes larger than with the equivalent weakly coupled approach. Finally, we showcase the utility of our method in different scenarios with complex joints and numerous magnets.
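    As background for the magnetic potential mentioned above, the standard interaction energy of two point magnetic dipoles is easy to state; the paper's model handles full linear magnetic rigid bodies, so this is only the simplest ingredient of such a potential, not the method itself.

```python
import numpy as np

def dipole_energy(m1, m2, r1, r2, mu0=4e-7 * np.pi):
    """Potential energy of two point magnetic dipoles m1, m2 (A*m^2)
    at positions r1, r2 (standard dipole-dipole interaction formula)."""
    r = np.asarray(r2, float) - np.asarray(r1, float)
    d = np.linalg.norm(r)
    rhat = r / d
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    return (mu0 / (4.0 * np.pi * d**3)) * (
        m1 @ m2 - 3.0 * (m1 @ rhat) * (m2 @ rhat)
    )

# Two aligned head-to-tail dipoles attract: the energy is negative and
# grows in magnitude as they approach, which is why strong magnets need
# the careful coupling with contact that the abstract describes.
E = dipole_energy([0, 0, 1], [0, 0, 1], [0, 0, 0], [0, 0, 0.1])
```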
  • Item
    A Multi-layer Solver for XPBD
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Mercier-Aubin, Alexandre; Kry, Paul G.; Skouras, Melina; Wang, He
    We present a novel multi-layer method for extended position-based dynamics that exploits a sequence of reduced models consisting of rigid and elastic parts to speed up convergence. Taking inspiration from concepts like adaptive rigidification and long-range constraints, we automatically generate different rigid bodies at each layer based on the current strain rate. During the solve, the rigid bodies provide coupling between progressively less distant vertices during layer iterations, and therefore the fully elastic iterations at the final layer start from a lower residual error. Our layered approach likewise helps with the treatment of contact, where the mixed solves of both rigid and elastic in the layers permit fast propagation of impacts. We show several experiments that guide the selection of parameters of the solver, including the number of layers, the iterations per layer, as well as the choice of rigid patterns. Overall, our results show lower compute times for achieving a desired residual reduction across a variety of simulation models and scenarios.
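    For context on the underlying solver the layers accelerate, here is a single XPBD projection of one distance constraint, following the standard XPBD update with compliance and an accumulated Lagrange multiplier; the multi-layer scheduling itself is the paper's contribution and is not sketched here.

```python
import math

def xpbd_distance_step(p1, p2, w1, w2, rest, compliance, dt, lam):
    """One XPBD projection of a distance constraint between 2D points.
    w1, w2 are inverse masses; lam is the accumulated Lagrange
    multiplier. Returns updated positions and multiplier."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    C = dist - rest                      # constraint violation
    nx, ny = dx / dist, dy / dist        # constraint gradient direction
    alpha = compliance / (dt * dt)       # time-step-scaled compliance
    dlam = (-C - alpha * lam) / (w1 + w2 + alpha)
    p1 = (p1[0] - w1 * dlam * nx, p1[1] - w1 * dlam * ny)
    p2 = (p2[0] + w2 * dlam * nx, p2[1] + w2 * dlam * ny)
    return p1, p2, lam + dlam

# Stretch a unit-rest-length rod to length 2 and project it back;
# with zero compliance a single step restores the rest length exactly.
a, b, lam = xpbd_distance_step((0.0, 0.0), (2.0, 0.0), 1.0, 1.0,
                               rest=1.0, compliance=0.0, dt=1 / 60, lam=0.0)
```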
  • Item
    Robust and Artefact-Free Deformable Contact with Smooth Surface Representations
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Du, Yinwei; Li, Yue; Coros, Stelian; Thomaszewski, Bernhard; Skouras, Melina; Wang, He
    Modeling contact between deformable solids is a fundamental problem in computer animation, mechanical design, and robotics. Existing methods based on C0 discretizations (piecewise linear or polynomial surfaces) suffer from discontinuities and irregularities in tangential contact forces, which can significantly affect simulation outcomes and even prevent convergence. In this work, we show that these limitations can be overcome with a smooth surface representation based on Implicit Moving Least Squares (IMLS). In particular, we propose a self-collision detection scheme tailored to IMLS surfaces that enables robust and efficient handling of challenging self-contacts. Through a series of test cases, we show that our approach offers advantages over existing methods in terms of accuracy and robustness for both forward and inverse problems.
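    The IMLS representation the abstract builds on can be sketched in its simplest point-set-surface form: a Gaussian-weighted average of signed distances to the tangent planes of nearby oriented samples. This is a generic illustration of the representation, not the paper's contact formulation.

```python
import numpy as np

def imls_value(x, points, normals, h):
    """Simple IMLS implicit value at query x: Gaussian-weighted average
    of signed distances to each sample's tangent plane. The zero level
    set of this function is the smooth reconstructed surface."""
    x = np.asarray(x, float)
    w = np.exp(-np.sum((points - x) ** 2, axis=1) / h**2)
    plane_dists = np.sum((x - points) * normals, axis=1)
    return float(np.sum(w * plane_dists) / np.sum(w))

# Samples on the line y = 0 with upward normals: the implicit function
# is positive above the surface and zero on it, and (unlike a piecewise
# linear surface) varies smoothly in the tangential direction.
pts = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
nrm = np.array([[0.0, 1.0]] * 3)
above = imls_value([0.0, 0.5], pts, nrm, h=1.0)
on_surface = imls_value([0.0, 0.0], pts, nrm, h=1.0)
```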