41-Issue 8


The 21st ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA 2022)

Durham University, UK & Online
13th - 15th September 2022


Animation and Simulation Techniques I
Physically Based Shape Matching
Matthias Müller, Miles Macklin, Nuttapong Chentanez, and Stefan Jeschke
Fast Numerical Coarsening with Local Factorizations
Zhongyun He, Jesús Pérez, and Miguel A. Otaduy
Stability Analysis of Explicit MPM
Song Bai and Craig Schroeder
Wassersplines for Neural Vector Field-Controlled Animation
Paul Zhang, Dmitriy Smirnov, and Justin Solomon
Voronoi Filters for Simulation Enrichment
Juan J. Casafranca and Miguel A. Otaduy
Animation and Simulation Techniques II
Differentiable Simulation for Outcome-Driven Orthognathic Surgery Planning
Daniel Dorda, Daniel Peter, Dominik Borer, Niko Benjamin Huber, Irena Sailer, Markus Gross, Barbara Solenthaler, and Bernhard Thomaszewski
High-Order Elasticity Interpolants for Microstructure Simulation
Antoine Chan-Lock, Jesús Pérez, and Miguel A. Otaduy
Surface-Only Dynamic Deformables using a Boundary Element Method
Ryusuke Sugimoto, Christopher Batty, and Toshiya Hachisuka
A Second Order Cone Programming Approach for Simulating Biphasic Materials
Pengbin Tang, Stelian Coros, and Bernhard Thomaszewski
A Second-Order Explicit Pressure Projection Method for Eulerian Fluid Simulation
Junwei Jiang, Xiangda Shen, Yuning Gong, Zeng Fan, Yanli Liu, Guanyu Xing, Xiaohua Ren, and Yanci Zhang
Motion I
Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices
Jose Luis Ponton, Haoran Yun, Carlos Andujar, and Nuria Pelechano
Sketching Vocabulary for Crowd Motion
C. D. Tharindu Mathew, Bedrich Benes, and Daniel Aliaga
A Fusion Crowd Simulation Method: Integrating Data with Dynamics, Personality with Common
Tianlu Mao, Ji Wang, Ruoyu Meng, Qinyuan Yan, Shaohua Liu, and Zhaoqi Wang
Cognitive Model of Agent Exploration with Vision and Signage Understanding
Colin Johnson and Brandon Haworth
Motion II
Pose Representations for Deep Skeletal Animation
Nefeli Andreou, Andreas Aristidou, and Yiorgos Chrysanthou
Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments
Eduardo Alvarado, Damien Rohmer, and Marie-Paule Cani
Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users
Yongjing Ye, Libin Liu, Lei Hu, and Shihong Xia
UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup
Lucas Mourot, Ludovic Hoyet, François Le Clerc, and Pierre Hellier
Synthesizing Get-Up Motions for Physics-based Characters
Anthony Frezzato, Arsh Tangri, and Sheldon Andrews
Capture, Tracking, and Facial Animation
Local Scale Adaptation to Hand Shape Model for Accurate and Robust Hand Tracking
Pratik Kalshetti and Parag Chaudhuri
Tiled Characteristic Maps for Tracking Detailed Liquid Surfaces
Fumiya Narita and Ryoichi Ando
Monocular Facial Performance Capture Via Deep Expression Matching
Stephen W. Bailey, Jérémy Riviere, Morten Mikkelsen, and James F. O'Brien
Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs
Monica Villanueva Aylagas, Hector Anadon Leon, Mattias Teye, and Konrad Tollmar
Facial Animation with Disentangled Identity and Motion using Transformers
Prashanth Chandran, Gaspard Zoss, Markus Gross, Paulo Gotardo, and Derek Bradley
Detailed Eye Region Capture and Animation
Glenn Kerbiriou, Maud Marchal, and Quentin Avril
Learning
Learning Physics with a Hierarchical Graph Network
Nuttapong Chentanez, Stefan Jeschke, Matthias Müller, and Miles Macklin
PERGAMO: Personalized 3D Garments from Monocular Video
Andrés Casado-Elvira, Marc Comino Trinidad, and Dan Casas
Context-based Style Transfer of Tokenized Gestures
Shigeru Kuriyama, Tomohiko Mukai, Takafumi Taketomi, and Tomoyuki Mukasa
MP-NeRF: Neural Radiance Fields for Dynamic Multi-person synthesis from Sparse Views
Xianjin Chao and Howard Leung
Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding
Aman Goel, Qianhui Men, and Edmond S. L. Ho

BibTeX (41-Issue 8)
                
@article{10.1111:cgf.14618,
  journal = {Computer Graphics Forum},
  title = {{Physically Based Shape Matching}},
  author = {Müller, Matthias and Macklin, Miles and Chentanez, Nuttapong and Jeschke, Stefan},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14618}
}

@article{10.1111:cgf.14619,
  journal = {Computer Graphics Forum},
  title = {{Fast Numerical Coarsening with Local Factorizations}},
  author = {He, Zhongyun and Pérez, Jesús and Otaduy, Miguel A.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14619}
}

@article{10.1111:cgf.14620,
  journal = {Computer Graphics Forum},
  title = {{Stability Analysis of Explicit MPM}},
  author = {Bai, Song and Schroeder, Craig},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14620}
}

@article{10.1111:cgf.14621,
  journal = {Computer Graphics Forum},
  title = {{Wassersplines for Neural Vector Field-Controlled Animation}},
  author = {Zhang, Paul and Smirnov, Dmitriy and Solomon, Justin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14621}
}

@article{10.1111:cgf.14622,
  journal = {Computer Graphics Forum},
  title = {{Voronoi Filters for Simulation Enrichment}},
  author = {Casafranca, Juan J. and Otaduy, Miguel A.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14622}
}

@article{10.1111:cgf.14623,
  journal = {Computer Graphics Forum},
  title = {{Differentiable Simulation for Outcome-Driven Orthognathic Surgery Planning}},
  author = {Dorda, Daniel and Peter, Daniel and Borer, Dominik and Huber, Niko Benjamin and Sailer, Irena and Gross, Markus and Solenthaler, Barbara and Thomaszewski, Bernhard},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14623}
}

@article{10.1111:cgf.14624,
  journal = {Computer Graphics Forum},
  title = {{High-Order Elasticity Interpolants for Microstructure Simulation}},
  author = {Chan-Lock, Antoine and Pérez, Jesús and Otaduy, Miguel A.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14624}
}

@article{10.1111:cgf.14625,
  journal = {Computer Graphics Forum},
  title = {{Surface-Only Dynamic Deformables using a Boundary Element Method}},
  author = {Sugimoto, Ryusuke and Batty, Christopher and Hachisuka, Toshiya},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14625}
}

@article{10.1111:cgf.14626,
  journal = {Computer Graphics Forum},
  title = {{A Second Order Cone Programming Approach for Simulating Biphasic Materials}},
  author = {Tang, Pengbin and Coros, Stelian and Thomaszewski, Bernhard},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14626}
}

@article{10.1111:cgf.14627,
  journal = {Computer Graphics Forum},
  title = {{A Second-Order Explicit Pressure Projection Method for Eulerian Fluid Simulation}},
  author = {Jiang, Junwei and Shen, Xiangda and Gong, Yuning and Fan, Zeng and Liu, Yanli and Xing, Guanyu and Ren, Xiaohua and Zhang, Yanci},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14627}
}

@article{10.1111:cgf.14628,
  journal = {Computer Graphics Forum},
  title = {{Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices}},
  author = {Ponton, Jose Luis and Yun, Haoran and Andujar, Carlos and Pelechano, Nuria},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14628}
}

@article{10.1111:cgf.14629,
  journal = {Computer Graphics Forum},
  title = {{Sketching Vocabulary for Crowd Motion}},
  author = {Mathew, C. D. Tharindu and Benes, Bedrich and Aliaga, Daniel},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14629}
}

@article{10.1111:cgf.14630,
  journal = {Computer Graphics Forum},
  title = {{A Fusion Crowd Simulation Method: Integrating Data with Dynamics, Personality with Common}},
  author = {Mao, Tianlu and Wang, Ji and Meng, Ruoyu and Yan, Qinyuan and Liu, Shaohua and Wang, Zhaoqi},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14630}
}

@article{10.1111:cgf.14631,
  journal = {Computer Graphics Forum},
  title = {{Cognitive Model of Agent Exploration with Vision and Signage Understanding}},
  author = {Johnson, Colin and Haworth, Brandon},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14631}
}

@article{10.1111:cgf.14632,
  journal = {Computer Graphics Forum},
  title = {{Pose Representations for Deep Skeletal Animation}},
  author = {Andreou, Nefeli and Aristidou, Andreas and Chrysanthou, Yiorgos},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14632}
}

@article{10.1111:cgf.14633,
  journal = {Computer Graphics Forum},
  title = {{Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments}},
  author = {Alvarado, Eduardo and Rohmer, Damien and Cani, Marie-Paule},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14633}
}

@article{10.1111:cgf.14634,
  journal = {Computer Graphics Forum},
  title = {{Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users}},
  author = {Ye, Yongjing and Liu, Libin and Hu, Lei and Xia, Shihong},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14634}
}

@article{10.1111:cgf.14635,
  journal = {Computer Graphics Forum},
  title = {{UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup}},
  author = {Mourot, Lucas and Hoyet, Ludovic and Le Clerc, François and Hellier, Pierre},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14635}
}

@article{10.1111:cgf.14637,
  journal = {Computer Graphics Forum},
  title = {{Local Scale Adaptation to Hand Shape Model for Accurate and Robust Hand Tracking}},
  author = {Kalshetti, Pratik and Chaudhuri, Parag},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14637}
}

@article{10.1111:cgf.14636,
  journal = {Computer Graphics Forum},
  title = {{Synthesizing Get-Up Motions for Physics-based Characters}},
  author = {Frezzato, Anthony and Tangri, Arsh and Andrews, Sheldon},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14636}
}

@article{10.1111:cgf.14638,
  journal = {Computer Graphics Forum},
  title = {{Tiled Characteristic Maps for Tracking Detailed Liquid Surfaces}},
  author = {Narita, Fumiya and Ando, Ryoichi},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14638}
}

@article{10.1111:cgf.14639,
  journal = {Computer Graphics Forum},
  title = {{Monocular Facial Performance Capture Via Deep Expression Matching}},
  author = {Bailey, Stephen W. and Riviere, Jérémy and Mikkelsen, Morten and O'Brien, James F.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14639}
}

@article{10.1111:cgf.14640,
  journal = {Computer Graphics Forum},
  title = {{Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs}},
  author = {Villanueva Aylagas, Monica and Anadon Leon, Hector and Teye, Mattias and Tollmar, Konrad},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14640}
}

@article{10.1111:cgf.14641,
  journal = {Computer Graphics Forum},
  title = {{Facial Animation with Disentangled Identity and Motion using Transformers}},
  author = {Chandran, Prashanth and Zoss, Gaspard and Gross, Markus and Gotardo, Paulo and Bradley, Derek},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14641}
}

@article{10.1111:cgf.14642,
  journal = {Computer Graphics Forum},
  title = {{Detailed Eye Region Capture and Animation}},
  author = {Kerbiriou, Glenn and Marchal, Maud and Avril, Quentin},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14642}
}

@article{10.1111:cgf.14643,
  journal = {Computer Graphics Forum},
  title = {{Learning Physics with a Hierarchical Graph Network}},
  author = {Chentanez, Nuttapong and Jeschke, Stefan and Müller, Matthias and Macklin, Miles},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14643}
}

@article{10.1111:cgf.14644,
  journal = {Computer Graphics Forum},
  title = {{PERGAMO: Personalized 3D Garments from Monocular Video}},
  author = {Casado-Elvira, Andrés and Comino Trinidad, Marc and Casas, Dan},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14644}
}

@article{10.1111:cgf.14645,
  journal = {Computer Graphics Forum},
  title = {{Context-based Style Transfer of Tokenized Gestures}},
  author = {Kuriyama, Shigeru and Mukai, Tomohiko and Taketomi, Takafumi and Mukasa, Tomoyuki},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14645}
}

@article{10.1111:cgf.14646,
  journal = {Computer Graphics Forum},
  title = {{MP-NeRF: Neural Radiance Fields for Dynamic Multi-person synthesis from Sparse Views}},
  author = {Chao, Xianjin and Leung, Howard},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14646}
}

@article{10.1111:cgf.14647,
  journal = {Computer Graphics Forum},
  title = {{Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding}},
  author = {Goel, Aman and Men, Qianhui and Ho, Edmond S. L.},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14647}
}

@article{10.1111:cgf.14648,
  journal = {Computer Graphics Forum},
  title = {{SCA 2022 CGF 41-8: Frontmatter}},
  author = {Michels, Dominik L. and Pirk, Soeren},
  year = {2022},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14648}
}

Recent Submissions

Now showing 1 - 31 of 31
  • Item
    Physically Based Shape Matching
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Müller, Matthias; Macklin, Miles; Chentanez, Nuttapong; Jeschke, Stefan; Dominik L. Michels; Soeren Pirk
    The shape matching method is a popular approach to simulate deformable objects in interactive applications due to its stability and simplicity. An important feature is that there is no need for a mesh since the method works on arbitrary local groups within a set of particles. A major drawback of shape matching is the fact that it is geometrically motivated and not derived from physical principles which makes calibration difficult. The fact that the method does not conserve volume can yield visual artifacts, e.g. when a tire is compressed but does not bulge. In this paper we present a new meshless simulation method that is related to shape matching but derived from continuous constitutive models. Volume conservation and stiffness can be specified with physical parameters. Further, if the elements of a tetrahedral mesh are used as groups, our method perfectly reproduces FEM based simulations.
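    As background for readers unfamiliar with the baseline this paper generalizes, the sketch below shows the classic shape-matching step of Müller et al. [2005] in Python/NumPy. It is not the paper's new constitutive formulation; the stiffness parameter is illustrative.

```python
import numpy as np

def shape_matching_step(x, x0, stiffness=1.0):
    """One classic shape-matching update: pull deformed particle
    positions x toward the best-fit rigid transform of the rest
    positions x0 (both arrays of shape (N, 3))."""
    cm, cm0 = x.mean(axis=0), x0.mean(axis=0)   # current / rest centers of mass
    p, q = x - cm, x0 - cm0                     # centered positions
    A = p.T @ q                                 # 3x3 covariance of the cluster
    U, _, Vt = np.linalg.svd(A)                 # polar decomposition via SVD
    if np.linalg.det(U @ Vt) < 0:               # avoid reflections
        U[:, -1] *= -1
    R = U @ Vt
    goals = cm + q @ R.T                        # goal positions g_i = R q_i + cm
    return x + stiffness * (goals - x)          # move particles toward goals
```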
  • Item
    Fast Numerical Coarsening with Local Factorizations
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) He, Zhongyun; Pérez, Jesús; Otaduy, Miguel A.; Dominik L. Michels; Soeren Pirk
    Numerical coarsening methods offer an attractive methodology for fast simulation of objects with high-resolution heterogeneity. However, they rely heavily on preprocessing, and are not suitable when objects undergo dynamic material or topology updates. We present methods that largely accelerate the two main processes of numerical coarsening, namely training data generation and the optimization of coarsening shape functions, and as a result we manage to leverage runtime numerical coarsening under local material updates. To accelerate the generation of training data, we propose a domain-decomposition solver based on substructuring that leverages local factorizations. To accelerate the computation of coarsening shape functions, we propose a decoupled optimization of smoothness and data fitting. We evaluate quantitatively the accuracy and performance of our proposed methods, and we show that they achieve accuracy comparable to the baseline, albeit with speed-ups of orders of magnitude. We also demonstrate our methods on example simulations with local material and topology updates.
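    A toy illustration of the kind of reuse the paper exploits, assuming SciPy and ignoring the interface (Schur-complement) coupling that full substructuring requires: each subdomain's stiffness block is factored once and the factor is reused across many solves, so only blocks touched by a material update need refactoring.

```python
from scipy.linalg import cho_factor, cho_solve

def solve_subdomains(K_blocks, rhs_sets):
    """Factor each (symmetric positive-definite) subdomain stiffness
    matrix once, then reuse the Cholesky factor for many right-hand
    sides, e.g. the many load cases of training-data generation."""
    factors = [cho_factor(K) for K in K_blocks]          # local factorizations
    return [[cho_solve(F, b) for b in rhs]               # cheap repeated solves
            for F, rhs in zip(factors, rhs_sets)]
```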
  • Item
    Stability Analysis of Explicit MPM
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Bai, Song; Schroeder, Craig; Dominik L. Michels; Soeren Pirk
    In this paper we analyze the stability of the explicit material point method (MPM). We focus on PIC, APIC, and CPIC transfers using quadratic and cubic splines in two and three dimensions. We perform a fully three-dimensional Von Neumann stability analysis to study the behavior within the bulk of a material. This reveals the relationship between the sound speed, CFL number, and actual time step restriction and its dependence on discretization options. We note that boundaries are generally less stable than the interior, with stable time steps generally decreasing until the limit when particles become isolated. We then analyze the stability of a single particle to derive a novel time step restriction that stabilizes simulations at their boundaries. Finally, we show that for explicit MPM with APIC or CPIC transfers, there are pathological cases where growth is observed at arbitrarily small time step sizes. While these cases do not necessarily pose a problem for practical usage, they do suggest that a guarantee of stability may be theoretically impossible and that necessary but not sufficient time step restrictions may be a necessary and practical compromise.
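    For context, the conventional heuristic that this kind of Von Neumann analysis makes precise is a CFL-style bound combining grid crossing with the elastic wave speed. The sketch below is that standard heuristic with illustrative constants, not the paper's derived restriction.

```python
import numpy as np

def explicit_mpm_dt(dx, max_velocity, E, nu, rho, cfl=0.5):
    """Conventional CFL-style time step bound for explicit MPM: the step
    must resolve both grid-cell crossing and elastic wave propagation."""
    c = np.sqrt(E * (1 - nu) / ((1 + nu) * (1 - 2 * nu) * rho))  # p-wave speed
    return cfl * dx / (max_velocity + c)
```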
  • Item
    Wassersplines for Neural Vector Field-Controlled Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhang, Paul; Smirnov, Dmitriy; Solomon, Justin; Dominik L. Michels; Soeren Pirk
    Much of computer-generated animation is created by manipulating meshes with rigs. While this approach works well for animating articulated objects like animals, it has limited flexibility for animating less structured free-form objects. We introduce Wassersplines, a novel trajectory inference method for animating unstructured densities based on recent advances in continuous normalizing flows and optimal transport. The key idea is to train a neurally-parameterized velocity field that represents the motion between keyframes. Trajectories are then computed by advecting keyframes through the velocity field. We solve an additional Wasserstein barycenter interpolation problem to guarantee strict adherence to keyframes. Our tool can stylize trajectories through a variety of PDE-based regularizers to create different visual effects. We demonstrate our tool on various keyframe interpolation problems to produce temporally-coherent animations without meshing or rigging.
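    A minimal sketch of the advection step described above, assuming the learned velocity field is exposed as a callable velocity_net(x, t) (a hypothetical name): keyframe samples are pushed through the field with a midpoint (RK2) integrator.

```python
def advect_keyframe(points, velocity_net, t0, t1, steps=32):
    """Advect keyframe samples through a learned velocity field v(x, t).
    `velocity_net` is an assumed callable returning an array of
    velocities for an array of points; `points` is an (N, d) array."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        mid = points + 0.5 * dt * velocity_net(points, t)        # half step
        points = points + dt * velocity_net(mid, t + 0.5 * dt)   # RK2 midpoint
        t += dt
    return points
```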
  • Item
    Voronoi Filters for Simulation Enrichment
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Casafranca, Juan J.; Otaduy, Miguel A.; Dominik L. Michels; Soeren Pirk
    The simulation of complex deformation problems often requires enrichment techniques that introduce local high-resolution detail on a generally coarse discretization. The use cases include spatial or temporal refinement of the discretization, the simulation of composite materials with phenomena occurring at different scales, or even codimensional simulation. We present an efficient simulation enrichment method for both local refinement of the discretization and codimensional effects. We dub our method Voronoi filters, as it combines two key computational elements. One is the use of kinematic filters to constrain coarse and fine deformations, and thus provide enrichment functions that are complementary to the coarse deformation. The other one is the use of a centroidal Voronoi discretization for the design of the enrichment functions, which adds high-resolution detail in a compact manner while preserving the rigid modes of coarse deformation. We demonstrate our method on simulation examples of composite materials, hybrid triangle-based and yarn-level simulation of cloth, or enrichment of flesh simulation with high-resolution detail.
  • Item
    Differentiable Simulation for Outcome-Driven Orthognathic Surgery Planning
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Dorda, Daniel; Peter, Daniel; Borer, Dominik; Huber, Niko Benjamin; Sailer, Irena; Gross, Markus; Solenthaler, Barbara; Thomaszewski, Bernhard; Dominik L. Michels; Soeren Pirk
    Algorithms at the intersection of computer graphics and medicine have recently gained renewed attention. Of particular interest are methods for virtual surgery planning (VSP), where treatment parameters must be carefully chosen to achieve a desired treatment outcome. FEM simulators can verify the treatment parameters by comparing a predicted outcome to the desired one. However, estimating the optimal parameters amounts to solving a challenging inverse problem. In current clinical practice it is solved manually by surgeons, who rely on their experience and intuition to iteratively refine the parameters, verifying them with simulated predictions. We prototype a differentiable FEM simulator and explore how it can enhance and simplify treatment planning, which is ultimately necessary to integrate simulation-based VSP tools into a clinical workflow. Specifically, we define a parametric treatment model based on surgeon input, and with analytically derived simulation gradients we optimise it against an objective defined on the visible facial 3D surface. By using sensitivity analysis, we can easily explore the solution-space with first-order approximations, which allow the surgeon to interactively visualise the effect of parameter variations on a given treatment plan. The objective function allows landmarks to be freely chosen, accommodating the multiple methodologies in clinical planning. We show that even with a very sparse set of guiding landmarks, our simulator robustly converges to a feasible post-treatment shape.
  • Item
    High-Order Elasticity Interpolants for Microstructure Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chan-Lock, Antoine; Pérez, Jesús; Otaduy, Miguel A.; Dominik L. Michels; Soeren Pirk
    We propose a novel formulation of elastic materials based on high-order interpolants, which accurately fits complex elastic behaviors but remains conservative. The proposed high-order interpolants can be regarded as a high-dimensional extension of radial basis functions, and they allow the interpolation of derivatives of elastic energy, in particular stress and stiffness. Given the proposed parameterization of elasticity models, we devise an algorithm to find optimal model parameters based on training data. We have tested our methodology for the homogenization of 2D microstructures, and we show that it succeeds in matching complex behaviors with high accuracy.
  • Item
    Surface-Only Dynamic Deformables using a Boundary Element Method
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Sugimoto, Ryusuke; Batty, Christopher; Hachisuka, Toshiya; Dominik L. Michels; Soeren Pirk
    We propose a novel surface-only method for simulating dynamic deformables without the need for volumetric meshing or volumetric integral evaluations. While based upon a boundary element method (BEM) for linear elastodynamics, our method goes beyond simple adoption of BEM by addressing several of its key limitations. We alleviate large displacement artifacts due to linear elasticity by extending BEM with a moving reference frame and surface-only fictitious forces, so that it only needs to handle deformations. To reduce memory and computational costs, we present a simple and practical method to compress the series of dense matrices required to simulate propagation of elastic waves over time. Furthermore, we explore a constraint enforcement mechanism and demonstrate the applicability of our method to general computer animation problems, such as frictional contact.
  • Item
    A Second Order Cone Programming Approach for Simulating Biphasic Materials
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Tang, Pengbin; Coros, Stelian; Thomaszewski, Bernhard; Dominik L. Michels; Soeren Pirk
    Strain limiting is a widely used approach for simulating biphasic materials such as woven textiles and biological tissue that exhibit a soft elastic regime followed by a hard deformation limit. However, existing methods are either based on slowly converging local iterations, or offer no guarantees on convergence. In this work, we propose a new approach to strain limiting based on second order cone programming (SOCP). Our work is based on the key insight that upper bounds on per-triangle deformations lead to convex quadratic inequality constraints. Though nonlinear, these constraints can be reformulated as inclusion conditions on convex sets, leading to a second order cone programming problem: a convex optimization problem that (a) is guaranteed to have a unique solution and (b) allows us to leverage efficient conic programming solvers. We first cast strain limiting with anisotropic bounds on stretching as a quadratically constrained quadratic program (QCQP), then show how this QCQP can be mapped to a second order cone programming problem. We further propose a constraint reflection scheme and empirically show that it exhibits superior energy-preservation properties compared to conventional end-of-step projection methods. Finally, we demonstrate our prototype implementation on a set of examples and illustrate how different deformation limits can be used to model a wide range of material behaviors.
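    To make the "convex quadratic inequality constraint" idea concrete, here is a toy single-triangle strain-limiting projection in CVXPY with a hypothetical setup and limits. The Frobenius-norm bound below constrains the root-sum-square of the deformation gradient's singular values; it is a convex stand-in for, not a reproduction of, the paper's per-triangle QCQP constraints.

```python
import cvxpy as cp
import numpy as np

# Find corrected vertex positions x closest to the unconstrained ones,
# subject to a convex bound on the deformation gradient F = D @ inv(D0).
x_free = np.array([[0.0, 0.0], [1.3, 0.0], [0.0, 1.3]])    # over-stretched triangle
D0inv = np.linalg.inv(np.array([[1.0, 0.0], [0.0, 1.0]]))  # rest edge matrix inverse
s_max = 1.1                                                # stretch limit

x = cp.Variable((3, 2))
D = cp.vstack([x[1] - x[0], x[2] - x[0]]).T                # deformed edge matrix
F = D @ D0inv                                              # deformation gradient
constraints = [cp.norm(F, 'fro') <= np.sqrt(2) * s_max]    # convex (SOC) bound
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - x_free)), constraints)
problem.solve()                                            # conic solver under the hood
print(x.value)                                             # strain-limited positions
```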
  • Item
    A Second-Order Explicit Pressure Projection Method for Eulerian Fluid Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Jiang, Junwei; Shen, Xiangda; Gong, Yuning; Fan, Zeng; Liu, Yanli; Xing, Guanyu; Ren, Xiaohua; Zhang, Yanci; Dominik L. Michels; Soeren Pirk
    In this paper, we propose a novel second-order explicit midpoint method to address the issue of energy loss and vorticity dissipation in Eulerian fluid simulation. The basic idea is to explicitly compute the pressure gradient at the middle time of each time step and apply it to the velocity field after advection. Theoretically, our solver can achieve higher accuracy than first-order solvers at similar computational cost. On the other hand, our method is at least twice as fast as implicit second-order solvers, at the cost of a small loss of accuracy. We have carried out a large number of 2D and 3D numerical experiments to verify the effectiveness and applicability of our algorithm.
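    The outline below illustrates the stated idea: estimate the pressure gradient at the half step and apply it after advection. The helpers advect and pressure_gradient are assumed, and the exact sub-step ordering is a plausible reading of the abstract, not the authors' scheme.

```python
def step_midpoint(u, dt, advect, pressure_gradient):
    """Second-order explicit midpoint flavor of pressure projection:
    build a midpoint state, evaluate its pressure gradient, and apply
    that gradient to the fully advected velocity field."""
    u_half = advect(u, 0.5 * dt)                        # advance to t + dt/2
    u_half = u_half - 0.5 * dt * pressure_gradient(u_half)
    u_adv = advect(u, dt)                               # full-step advection
    return u_adv - dt * pressure_gradient(u_half)       # midpoint pressure force
```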
  • Item
    Combining Motion Matching and Orientation Prediction to Animate Avatars for Consumer-Grade VR Devices
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Ponton, Jose Luis; Yun, Haoran; Andujar, Carlos; Pelechano, Nuria; Dominik L. Michels; Soeren Pirk
    The animation of user avatars plays a crucial role in conveying their pose, gestures, and relative distances to virtual objects or other users. Self-avatar animation in immersive VR helps improve the user experience and provides a Sense of Embodiment. However, consumer-grade VR devices typically include at most three trackers, one at the Head Mounted Display (HMD), and two at the handheld VR controllers. Since the problem of reconstructing the user pose from such sparse data is ill-defined, especially for the lower body, the approach adopted by most VR games consists of assuming the body orientation matches that of the HMD, and applying animation blending and time-warping from a reduced set of animations. Unfortunately, this approach produces noticeable mismatches between user and avatar movements. In this work we present a new approach to animate user avatars that is suitable for current mainstream VR devices. First, we use a neural network to estimate the user's body orientation based on the tracking information from the HMD and the hand controllers. Then we use this orientation together with the velocity and rotation of the HMD to build a feature vector that feeds a Motion Matching algorithm. We built a MoCap database with animations of VR users wearing an HMD and used it to test our approach on both self-avatars and other users' avatars. Our results show that our system can provide a large variety of lower body animations while correctly matching the user orientation, which in turn allows us to represent not only forward movements but also stepping in any direction.
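    A minimal flavor of the lookup described above: the predicted body orientation is concatenated with the HMD velocity and angular velocity into a query feature, and the nearest database feature selects the pose. The feature layout here is an assumption for illustration, not the paper's exact vector.

```python
import numpy as np

def motion_matching_query(db_features, db_poses, hmd_vel, hmd_ang_vel, body_yaw):
    """Brute-force motion-matching lookup: return the database pose whose
    feature vector is nearest (Euclidean) to the current query."""
    query = np.concatenate([[body_yaw], hmd_vel, hmd_ang_vel])  # build query feature
    d = np.linalg.norm(db_features - query, axis=1)             # distance to all entries
    return db_poses[np.argmin(d)]                               # best-matching pose
```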
  • Item
    Sketching Vocabulary for Crowd Motion
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Mathew, C. D. Tharindu; Benes, Bedrich; Aliaga, Daniel; Dominik L. Michels; Soeren Pirk
    This paper proposes and evaluates a sketching language to author crowd motion. It focuses on the path, speed, thickness, and density parameters of crowd motion. A sketch-based vocabulary is proposed for each parameter and evaluated in a user study against complex crowd scenes. A sketch recognition pipeline converts the sketches into a crowd simulation. The user study results show that 1) participants at various skill levels can draw accurate crowd motion through sketching, 2) certain sketch styles lead to a more accurate representation of crowd parameters, and 3) sketching allows complex crowd motions to be produced in a few seconds. The results also show that some styles, although accurate, are less preferred than less accurate ones.
  • Item
    A Fusion Crowd Simulation Method: Integrating Data with Dynamics, Personality with Common
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Mao, Tianlu; Wang, Ji; Meng, Ruoyu; Yan, Qinyuan; Liu, Shaohua; Wang, Zhaoqi; Dominik L. Michels; Soeren Pirk
    This paper proposes a novel crowd simulation method which integrates not only modelling ideas but also advantages from both data-driven methods and crowd dynamics methods. To seamlessly integrate these two different modelling ideas, first, a fusion crowd motion model is developed. In this model the motion of the crowd is driven dynamically by different forces. Some of the forces are modeled under a universal interaction mechanism, which describes the common part of crowd dynamics. Others are modeled by examples from real data, which describe the personality part of each agent's motion. Second, a construction method for the example dataset is proposed to support the fusion model. In the dataset, crowd trajectories captured in the real world are decomposed and re-described under the structure of the fusion model. Thus, the personality parts hidden in the real data can be isolated and extracted, making the data understandable and migratable for our fusion model. A comprehensive crowd motion generation workflow using the fusion model and example dataset is also proposed. Quantitative and qualitative experiments and user studies are conducted. Results show that the proposed fusion crowd simulation method can generate crowd motion with great fidelity, which not only matches the macro characteristics of real data but also exhibits plenty of micro-level personality, showing the diversity of crowd motion.
  • Item
    Cognitive Model of Agent Exploration with Vision and Signage Understanding
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Johnson, Colin; Haworth, Brandon; Dominik L. Michels; Soeren Pirk
    Signage systems play an essential role in ensuring safe, stress-free, and efficient navigation for the occupants of indoor spaces. Crowd simulations with sufficiently realistic virtual humans provide a convenient and cost-effective approach to evaluating and optimizing signage systems. In this work, we develop an agent model which makes use of image processing on parametric saliency maps to visually identify signage and distractions in the agent's field of view. Information from identified signs is incorporated into a grid-based representation of wayfinding familiarity, which is used to guide informed exploration of the agent's environment using a modified A* algorithm. In areas with low wayfinding familiarity, the agent follows a random exploration behaviour based on sampling a grid of previously observed locations for heuristic values based on space syntax isovist measures. The resulting agent design is evaluated in a variety of test environments and found to be able to reliably navigate towards a goal location using a combination of signage and random exploration.
  • Item
    Pose Representations for Deep Skeletal Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Andreou, Nefeli; Aristidou, Andreas; Chrysanthou, Yiorgos; Dominik L. Michels; Soeren Pirk
    Data-driven skeletal animation relies on the existence of a suitable learning scheme, which can capture the rich context of motion. However, commonly used motion representations often fail to accurately encode the full articulation of motion, or present artifacts. In this work, we address the fundamental problem of finding a robust pose representation for motion, suitable for deep skeletal animation, one that can better constrain poses and faithfully capture nuances correlated with skeletal characteristics. Our representation is based on dual quaternions, mathematical abstractions with well-defined operations, which simultaneously encode rotational and positional information, enabling a rich encoding centered around the root. We demonstrate that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations. We conduct an ablation study to evaluate the impact of various losses that can be incorporated during learning. Leveraging the fact that our representation implicitly encodes skeletal motion attributes, we train a network on a dataset comprising skeletons with different proportions, without the need to first retarget them to a universal skeleton, which would cause subtle motion elements to be missed. Qualitative results demonstrate the usefulness of the parameterization in skeleton-specific synthesis.
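    For reference, the dual-quaternion encoding this representation builds on can be computed as below: q_r is the joint rotation and the dual part q_d = 0.5 (0, t) q_r encodes the position t. This is the standard textbook construction, not code from the paper.

```python
import numpy as np

def pose_to_dual_quaternion(q, t):
    """Encode a rotation (unit quaternion q = [w, x, y, z]) and a
    position t = [tx, ty, tz] as a dual quaternion (q_r, q_d)."""
    w, x, y, z = q
    tw, tx, ty, tz = 0.0, *t
    # dual part: half the quaternion product (0, t) * q_r
    q_d = 0.5 * np.array([
        tw * w - tx * x - ty * y - tz * z,
        tw * x + tx * w + ty * z - tz * y,
        tw * y - tx * z + ty * w + tz * x,
        tw * z + tx * y - ty * x + tz * w,
    ])
    return np.asarray(q, dtype=float), q_d
```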
  • Item
    Generating Upper-Body Motion for Real-Time Characters Making their Way through Dynamic Environments
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Alvarado, Eduardo; Rohmer, Damien; Cani, Marie-Paule; Dominik L. Michels; Soeren Pirk
    Real-time character animation in dynamic environments requires the generation of plausible upper-body movements regardless of the nature of the environment, including non-rigid obstacles such as vegetation. We propose a flexible model for upper-body interactions, based on the anticipation of the character's surroundings, and on antagonistic controllers to adapt the amount of muscular stiffness and response time to better deal with obstacles. Our solution relies on a hybrid method for character animation that couples a keyframe sequence with kinematic constraints and lightweight physics. The dynamic response of the character's upper-limbs leverages antagonistic controllers, allowing us to tune tension/relaxation in the upper-body without diverging from the reference keyframe motion. A new sight model, controlled by procedural rules, enables high-level authoring of the way the character generates interactions by adapting its stiffness and reaction time. As results show, our real-time method offers precise and explicit control over the character's behavior and style, while seamlessly adapting to new situations. Our model is therefore well suited for gaming applications.
  • Item
    Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Ye, Yongjing; Liu, Libin; Hu, Lei; Xia, Shihong; Dominik L. Michels; Soeren Pirk
    Animating an avatar that reflects a user's action in the VR world enables natural interactions with the virtual environment. It has the potential to allow remote users to communicate and collaborate in a way as if they met in person. However, a typical VR system provides only a very sparse set of up to three positional sensors, including a head-mounted display (HMD) and optionally two hand-held controllers, making the estimation of the user's full-body movement a difficult problem. In this work, we present a data-driven physics-based method for predicting the realistic full-body movement of the user according to the transformations of these VR trackers and simulating an avatar character to mimic such user actions in the virtual world in real time. We train our system using reinforcement learning with carefully designed pretraining processes to ensure the success of the training and the quality of the simulation. We demonstrate the effectiveness of the method with an extensive set of examples.
  • Item
    UnderPressure: Deep Learning for Foot Contact Detection, Ground Reaction Force Estimation and Footskate Cleanup
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Mourot, Lucas; Hoyet, Ludovic; Le Clerc, François; Hellier, Pierre; Dominik L. Michels; Soeren Pirk
    Human motion synthesis and editing are essential to many applications like video games, virtual reality, and film postproduction. However, they often introduce artefacts in motion capture data, which can be detrimental to the perceived realism. In particular, footskating is a frequent and disturbing artefact, which requires knowledge of foot contacts to be cleaned up. Current approaches to obtain foot contact labels rely either on unreliable threshold-based heuristics or on tedious manual annotation. In this article, we address automatic foot contact label detection from motion capture data with a deep learning based method. To this end, we first publicly release UNDERPRESSURE, a novel motion capture database labelled with pressure insoles data serving as reliable knowledge of foot contact with the ground. Then, we design and train a deep neural network to estimate ground reaction forces exerted on the feet from motion data and then derive accurate foot contact labels. The evaluation of our model shows that we significantly outperform heuristic approaches based on height and velocity thresholds and that our approach is much more robust when applied on motion sequences suffering from perturbations like noise or footskate. We further propose a fully automatic workflow for footskate cleanup: foot contact labels are first derived from estimated ground reaction forces. Then, footskate is removed by solving foot constraints through an optimisation-based inverse kinematics (IK) approach that ensures consistency with the estimated ground reaction forces. Beyond footskate cleanup, both the database and the method we propose could help to improve many approaches based on foot contact labels or ground reaction forces, including inverse dynamics problems like motion reconstruction and learning of deep motion models in motion synthesis or character animation. Our implementation and pre-trained model, as well as links to the database, can be found at github.com/InterDigitalInc/UnderPressure.
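    Once per-frame ground reaction forces are estimated, deriving contact labels can be as simple as thresholding the vertical force against body weight. The sketch below uses an illustrative 5%-of-body-weight threshold; it is not the paper's trained decision rule.

```python
def contact_labels_from_grf(vgrf, body_mass_kg, threshold=0.05):
    """Binary foot-contact labels from (estimated) vertical ground
    reaction forces in newtons: a foot counts as in contact when it
    carries more than a small fraction of body weight."""
    return vgrf > threshold * body_mass_kg * 9.81   # True where the foot is loaded
```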
  • Item
    Local Scale Adaptation to Hand Shape Model for Accurate and Robust Hand Tracking
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kalshetti, Pratik; Chaudhuri, Parag; Dominik L. Michels; Soeren Pirk
    The accuracy of hand tracking algorithms depends on how closely the geometry of the mesh model resembles the user's hand shape. Most existing methods rely on a learned shape space model; however, this fails to generalize to unseen hand shapes with significant deviations from the training set. We introduce local scale adaptation to augment this data-driven shape model and thus enable modeling hands of substantially different sizes. We also present a framework to calibrate our proposed hand shape model by registering it to depth data and achieve accurate and robust tracking. We demonstrate the capability of our proposed adaptive shape model over the most widely used existing hand model by registering it to subjects from different demographics. We also validate the accuracy and robustness of our tracking framework on challenging public hand datasets where we improve over state-of-the-art methods. Our adaptive hand shape model and tracking framework offer a significant boost towards generalizing the accuracy of hand tracking.
  • Item
    Synthesizing Get-Up Motions for Physics-based Characters
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Frezzato, Anthony; Tangri, Arsh; Andrews, Sheldon; Dominik L. Michels; Soeren Pirk
    We propose a method for synthesizing get-up motions for physics-based humanoid characters. Beginning from a supine or prone state, our objective is not to imitate individual motion clips, but to produce motions that match input curves describing the style of get-up motion. Our framework uses deep reinforcement learning to learn control policies for the physics-based character. A latent embedding of natural human poses is computed from a motion capture database, and the embedding is furthermore conditioned on the input features. We demonstrate that our approach can synthesize motions that follow the style of user authored curves, as well as curves extracted from reference motions. In the latter case, motions of the physics-based character resemble the original motion clips. New motions can be synthesized easily by changing only a small number of controllable parameters. We also demonstrate the success of our controllers on rough and inclined terrain.
  • Item
    Tiled Characteristic Maps for Tracking Detailed Liquid Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Narita, Fumiya; Ando, Ryoichi; Dominik L. Michels; Soeren Pirk
    We introduce tiled characteristic maps for the level set method that accurately preserve both thin sheets and sharp edges over a long period of time. Instead of resorting to high-order differential schemes, we utilize the characteristics mapping method to minimize numerical diffusion induced by advection. We find that although a single characteristic map could be used to better preserve detailed geometry, it suffers from frequent global re-initialization due to the strong distortions that are locally generated. We show that when multiple localized tiled characteristic maps are used, this limitation is constrained within tiles, enabling long-term preservation of detailed structures where little distortion is observed. When applied to liquid simulation, we demonstrate that, at a reasonable amount of added computational cost, our method retains small-scale, high-fidelity detail (e.g., splashes and waves) that is quickly smeared out or deleted with purely grid-based or particle level set methods.
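    The core mechanism, sketched under assumed helpers (coords for grid point positions, sample for field interpolation): advect a backward map rather than the level set itself, and evaluate the surface by composing the initial level set with the map.

```python
def advect_char_map(Xmap, vel, coords, dt, sample):
    """Semi-Lagrangian update of a backward characteristic map Xmap
    (current grid point -> initial position): trace each grid point
    back through the velocity field, then pull the stored map value
    from the departure point. `coords` and `sample` are assumed helpers."""
    departure = coords - dt * vel      # backtrace through the flow
    return sample(Xmap, departure)     # compose the map, not the level set data

def surface_value(phi0_sample, Xmap):
    """Evaluate the liquid surface through the map: phi(x) = phi0(X(x)).
    Only the initial level set is ever resampled, so sharp features
    survive until local map distortion forces a (tile-local) reset."""
    return phi0_sample(Xmap)
```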
  • Item
    Monocular Facial Performance Capture Via Deep Expression Matching
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Bailey, Stephen W.; Riviere, Jérémy; Mikkelsen, Morten; O'Brien, James F.; Dominik L. Michels; Soeren Pirk
    Facial performance capture is the process of automatically animating a digital face according to a captured performance of an actor. Recent developments in this area have focused on high-quality results using expensive head-scanning equipment and camera rigs. These methods produce impressive animations that accurately capture subtle details in an actor's performance. However, these methods are accessible only to content creators with relatively large budgets. Current methods using inexpensive recording equipment generally produce lower quality output that is unsuitable for many applications. In this paper, we present a facial performance capture method that does not require facial scans and instead animates an artist-created model using standard blendshapes. Furthermore, our method gives artists high-level control over animations through a workflow similar to existing commercial solutions. Given a recording, our approach matches keyframes of the video with corresponding expressions from an animated library of poses. A Gaussian process model then computes the full animation by interpolating from the set of matched keyframes. Our expression-matching method computes a low-dimensional latent code from an image that represents a facial expression while factoring out the facial identity. Images depicting similar facial expressions are identified by their proximity in the latent space. In our results, we demonstrate the fidelity of our expression-matching method. We also compare animations generated with our approach to animations generated with commercially available software.
  • Item
    Voice2Face: Audio-driven Facial and Tongue Rig Animations with cVAEs
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Villanueva Aylagas, Monica; Anadon Leon, Hector; Teye, Mattias; Tollmar, Konrad; Dominik L. Michels; Soeren Pirk
    We present Voice2Face: a Deep Learning model that generates face and tongue animations directly from recorded speech. Our approach consists of two steps: a conditional Variational Autoencoder generates mesh animations from speech, while a separate module maps the animations to rig controller space. Our contributions include an automated method for speech style control, a method to train a model with data from multiple quality levels, and a method for animating the tongue. Unlike previous works, our model generates animations without speaker-dependent characteristics while allowing speech style control. We demonstrate through a user study that Voice2Face significantly outperforms a comparative state-of-the-art model in terms of perceived animation quality, and our quantitative evaluation suggests that Voice2Face yields more accurate lip closure in speech with bilabials through our speech style optimization. Both evaluations also show that our data quality conditioning scheme outperforms both an unconditioned model and a model trained with a smaller high-quality dataset. Finally, the user study shows a preference for animations including tongue. Results from our model can be seen at https://go.ea.com/voice2face.
  • Item
    Facial Animation with Disentangled Identity and Motion using Transformers
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chandran, Prashanth; Zoss, Gaspard; Gross, Markus; Gotardo, Paulo; Bradley, Derek; Dominik L. Michels; Soeren Pirk
    We propose a 3D+time framework for modeling dynamic sequences of 3D facial shapes, representing realistic non-rigid motion during a performance. Our work extends neural 3D morphable models by learning a motion manifold using a transformer architecture. More specifically, we derive a novel transformer-based autoencoder that can model and synthesize 3D geometry sequences of arbitrary length. This transformer naturally determines frame-to-frame correlations required to represent the motion manifold, via the internal self-attention mechanism. Furthermore, our method disentangles the constant facial identity from the time-varying facial expressions in a performance, using two separate codes to represent neutral identity and the performance itself within separate latent subspaces. Thus, the model represents identity-agnostic performances that can be paired with an arbitrary new identity code and fed through our new identity-modulated performance decoder; the result is a sequence of 3D meshes for the performance with the desired identity and temporal length. We demonstrate how our disentangled motion model has natural applications in performance synthesis, performance retargeting, key-frame interpolation and completion of missing data, performance denoising and retiming, and other potential applications that include full 3D body modeling.
  • Item
    Detailed Eye Region Capture and Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kerbiriou, Glenn; Marchal, Maud; Avril, Quentin; Dominik L. Michels; Soeren Pirk
    Even though the appearance and geometry of the human eye have been extensively studied during the last decade, the geometrical correlation between gaze direction, eyelid aperture and eyelid shape has not been empirically modeled. In this paper, we propose a data-driven approach for capturing and modeling the subtle features of the human eye region, such as the inner eye corner and the skin bulging effect due to globe orientation. Our approach consists of an original experimental setup to capture the eye region geometry variations combined with a 3D reconstruction method. Regarding the eye region capture, we scanned 55 participants performing 36 eye poses. To animate a participant's eye region, we register the different poses to a vertex-wise correspondence before blending them in a trilinear fashion. We show that our 3D animation results are visually pleasing and realistic while bringing novel eye features compared to state-of-the-art models.
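    Blending registered poses "in a trilinear fashion" can be read as standard trilinear interpolation over three axes, e.g. gaze yaw, gaze pitch and eyelid aperture. Here is a sketch under that assumption; the paper's actual parameterization may differ.

```python
import numpy as np

def blend_eye_region(corner_meshes, u, v, w):
    """Trilinear blend of registered eye-region meshes. `corner_meshes`
    is an assumed (2, 2, 2, V, 3) array holding meshes at the extremes
    of the three axes (u, v, w in [0, 1]), all in vertex correspondence."""
    wu = np.array([1 - u, u]).reshape(2, 1, 1, 1, 1)   # weights along axis u
    wv = np.array([1 - v, v]).reshape(1, 2, 1, 1, 1)   # weights along axis v
    ww = np.array([1 - w, w]).reshape(1, 1, 2, 1, 1)   # weights along axis w
    return (corner_meshes * wu * wv * ww).sum(axis=(0, 1, 2))   # (V, 3) mesh
```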
  • Item
    Learning Physics with a Hierarchical Graph Network
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chentanez, Nuttapong; Jeschke, Stefan; Müller, Matthias; Macklin, Miles; Dominik L. Michels; Soeren Pirk
    We propose a hierarchical graph for learning physics and a novel way to handle obstacles. The finest level of the graph consists of the particles themselves. Coarser levels consist of the cells of sparse grids with successively doubling cell sizes covering the volume occupied by the particles. The hierarchical structure allows information to propagate over great distances in a single message passing iteration. The novel obstacle handling allows the simulation to be obstacle-aware without the need for ghost particles. We train the network to predict the effective acceleration produced by multiple sub-steps of a 3D multi-material material point method (MPM) simulation consisting of water, sand and snow with complex obstacles. Our network produces lower error, trains up to 7.0X faster and runs inference up to 11.3X faster than [SGGP*20]. It is also, on average, about 3.7X faster than the Taichi Elements simulation running on the same hardware in our tests.
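    A small sketch of the hierarchy construction as described: level 0 is the particle set, and each coarser level keeps only the occupied cells of a sparse grid whose cell size doubles. Illustrative code, not the authors' implementation.

```python
import numpy as np

def build_hierarchy(points, base_cell, levels):
    """Per level, return the unique occupied cell indices of a sparse
    grid plus each particle's cell assignment; the cell size doubles
    at every level so messages can hop long distances in one pass."""
    hierarchy = []
    cell = base_cell
    for _ in range(levels):
        ids = np.floor(points / cell).astype(np.int64)            # cell index per particle
        cells, inverse = np.unique(ids, axis=0, return_inverse=True)
        hierarchy.append((cells, inverse))                        # nodes + assignment edges
        cell *= 2.0                                               # coarsen: double cell size
    return hierarchy
```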
  • Item
    PERGAMO: Personalized 3D Garments from Monocular Video
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Casado-Elvira, Andrés; Comino Trinidad, Marc; Casas, Dan; Dominik L. Michels; Soeren Pirk
    Clothing plays a fundamental role in digital humans. Current approaches to animate 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational run-time cost, which hinders their deployment; and the simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match the real-world behavior, and generalizes to unseen body motions extracted from a motion capture dataset.
  • Item
    Context-based Style Transfer of Tokenized Gestures
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Kuriyama, Shigeru; Mukai, Tomohiko; Taketomi, Takafumi; Mukasa, Tomoyuki; Dominik L. Michels; Soeren Pirk
    Gestural animations in the amusement or entertainment field often require rich expressions; however, it is still challenging to synthesize characteristic gestures automatically. Although style transfer based on a neural network model is a potential solution, existing methods mainly focus on cyclic motions such as gaits and require re-training when adding new motion styles. Moreover, their per-pose transformation cannot consider time-dependent features, and therefore motion styles of different periods and timings are difficult to transfer. This limitation is fatal for gestural motions, which require complicated time alignment due to the variety of exaggerated or intentionally performed behaviors. This study introduces a context-based style transfer of gestural motions with neural networks to ensure stable conversion even for exaggerated, dynamically complicated gestures. We present a model based on a vision transformer for transferring gestures' content and style features by time-segmenting them to compose tokens in a latent space. We extend this model to yield the probability of swapping gestures' tokens for style transfer. A transformer model is suited to semantically consistent matching among gesture tokens, owing to the correlation with spoken words. The compact architecture of our network model requires only a small number of parameters and computational costs, which is suitable for real-time applications on an ordinary device. We introduce loss functions given by the restoration error of identically and cyclically transferred gesture tokens and by the similarity losses of content and style evaluated by splicing features inside the transformer. This design of losses allows unsupervised and zero-shot learning, through which scalability for motion data is obtained. We comparatively evaluated our style transfer method, mainly focusing on expressive gestures, using our dataset captured for various scenarios and styles, and introduced new error metrics tailored for gestures. Our experiments showed the superiority of our method in numerical accuracy and stability of style transfer over existing methods.
  • Item
    MP-NeRF: Neural Radiance Fields for Dynamic Multi-person synthesis from Sparse Views
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Chao, Xianjin; Leung, Howard; Dominik L. Michels; Soeren Pirk
    Multi-person novel view synthesis aims to generate free-viewpoint videos for dynamic scenes of multiple persons. However, current methods require numerous views to reconstruct a dynamic person and only achieve good performance when a single person is present in the video. This paper aims to reconstruct a multi-person scene with fewer views, especially addressing the occlusion and interaction problems that appear in multi-person scenes. We propose MP-NeRF, a practical method for multi-person novel view synthesis from sparse cameras without pre-scanned template human models. We apply a multi-person SMPL template as the identity and human motion prior. Then we build a global latent code to integrate the relative observations among multiple people, allowing us to represent multiple dynamic people as multiple neural radiance representations from sparse views. Experiments on the multi-person dataset MVMP show that our method is superior to other state-of-the-art methods.
  • Item
    Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Goel, Aman; Men, Qianhui; Ho, Edmond S. L.; Dominik L. Michels; Soeren Pirk
    Synthesizing multi-character interactions is a challenging task due to the complex and varied interactions between the characters. In particular, precise spatiotemporal alignment between characters is required in generating close interactions such as dancing and fighting. Existing work in generating multi-character interactions focuses on generating a single type of reactive motion for a given sequence, which results in a lack of variety in the resultant motions. In this paper, we propose a novel way to create realistic human reactive motions that are not present in the given dataset by mixing and matching different types of close interactions. We propose a Conditional Hierarchical Generative Adversarial Network with Multi-Hot Class Embedding to generate the Mix and Match reactive motions of the follower from a given motion sequence of the leader. Experiments are conducted on both noisy (depth-based) and high-quality (MoCap-based) interaction datasets. The quantitative and qualitative results show that our approach outperforms the state-of-the-art methods on the given datasets. We also provide an augmented dataset with realistic reactive motions to stimulate future research in this area.
  • Item
    SCA 2022 CGF 41-8: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Dominik L. Michels; Soeren Pirk