36-Issue 8
Browsing 36-Issue 8 by Subject "Animation"
Now showing 1 - 3 of 3
Item: Building a Large Database of Facial Movements for Deformation Model‐Based 3D Face Tracking
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Sibbing, Dominik; Kobbelt, Leif; Chen, Min and Zhang, Hao (Richard)
We introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer-grade camera. Our approach takes detected 2D facial features as input and matches them with projections of 3D features of a deformable model to determine its pose and shape. To make the tracking and reconstruction more robust, we add a smoothness prior for pose and deformation changes of the faces. Our major contribution lies in the formulation of the deformation prior, which we derive from a large database of facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length, which we use to predict the facial motion based on previous frames. To keep the deformation model compact and independent of the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply it to other physiognomies and thereby re‐target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples. (A hedged sketch of PCA over deformation gradients follows this listing.)

Item: Detail‐Preserving Explicit Mesh Projection and Topology Matching for Particle‐Based Fluids
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Dagenais, F.; Gagnon, J.; Paquette, E.; Chen, Min and Zhang, Hao (Richard)
We propose a new explicit surface tracking approach for particle‐based fluid simulations. Our goal is to advect and update a highly detailed surface while only computing a coarse simulation. Current explicit surface methods lose surface details when projecting on the isosurface of an implicit function built from particles. Our approach uses a detail‐preserving projection, based on a signed distance field, to prevent the divergence of the explicit surface without losing its initial details. Furthermore, we introduce a novel topology matching stage that corrects the topology of the explicit surface based on the topology of an implicit function. To that end, we introduce an optimization approach to update the signed distance field of our explicit mesh before remeshing. Our approach is successfully used to preserve the surface details of melting and highly viscous objects, and is shown to be stable on complex cases involving multiple topological changes. Compared to the computation of a high‐resolution simulation, using our approach with a coarse fluid simulation significantly reduces the computation time and improves the quality of the resulting surface. (A hedged sketch of a detail-limited isosurface projection follows this listing.)

Item: Multi‐Variate Gaussian‐Based Inverse Kinematics
(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Huang, Jing; Wang, Qi; Fratarcangeli, Marco; Yan, Ke; Pelachaud, Catherine; Chen, Min and Zhang, Hao (Richard)
Inverse kinematics (IK) equations are usually solved through approximate linearizations or heuristics. These methods lead to character animations that look unnatural or are unstable because they do not consider both the motion coherence and the limits of human joints. In this paper, we present a method based on multi‐variate Gaussian distribution models (MGDMs), which precisely specify the soft joint constraints of a kinematic skeleton. Each distribution model is described by a covariance matrix and a mean vector representing both the joint limits and the coherence of motion of different limbs. The MGDMs are learned automatically from motion capture data in a fast and unsupervised process. When the character is animated or posed, a Gaussian process synthesizes a new MGDM for each different vector of target positions, and the corresponding objective function is solved with Jacobian‐based IK. This makes our method practical to use and easy to insert into pre‐existing animation pipelines. Compared with previous works, our method is more stable and more precise, while also satisfying the anatomical constraints of human limbs. Our method leads to natural and realistic results without sacrificing real‐time performance. (A hedged sketch of fitting an MGDM and using it as a pose prior follows this listing.)
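
Sketch for the first item: the abstract's idea of representing facial deformation by per-triangle deformation gradients and extracting the major deformation modes with a principal component analysis can be illustrated with a minimal numpy sketch. The array layout, the function names deformation_modes and reconstruct, and the choice of 20 modes are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def deformation_modes(gradients, n_modes=20):
        # gradients: (n_frames, n_triangles, 3, 3); each 3x3 matrix maps a triangle
        # of a neutral reference face to its deformed counterpart in one frame.
        n_frames = gradients.shape[0]
        X = gradients.reshape(n_frames, -1)          # one flattened sample per frame
        mean = X.mean(axis=0)
        # PCA via SVD of the centered data; rows of Vt are deformation-gradient modes.
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_modes]

    def reconstruct(mean, modes, coeffs):
        # Deformation gradients corresponding to a low-dimensional coefficient vector.
        flat = mean + coeffs @ modes
        return flat.reshape(-1, 3, 3)

Because the model lives in deformation-gradient space rather than vertex-position space, the same coefficient vector can in principle be applied to a different reference face, which is the retargeting property the abstract mentions.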
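Sketch for the second item: a minimal sketch, assuming the signed distance field built from the particles and its gradient are available as callables (the names sdf, sdf_grad and the parameter max_step are placeholders), of a projection that pulls explicit-mesh vertices toward the zero isosurface while clamping the per-vertex displacement. The clamping stands in for the paper's detail-preserving projection and is not the authors' actual formulation.

    import numpy as np

    def project_to_isosurface(vertices, sdf, sdf_grad, max_step=0.5):
        # vertices : (n, 3) explicit-mesh vertex positions
        # sdf      : callable (n, 3) -> (n,)   signed distance sampled from the particles
        # sdf_grad : callable (n, 3) -> (n, 3) gradient of the signed distance field
        d = sdf(vertices)                        # signed distance at each vertex
        g = sdf_grad(vertices)
        g /= np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
        # Clamp the correction so fine surface detail is not flattened onto the
        # coarse isosurface in a single projection step.
        step = np.clip(d, -max_step, max_step)
        return vertices - step[:, None] * g      # walk along the gradient toward phi = 0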
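Sketch for the third item: a minimal sketch of fitting a multi-variate Gaussian (mean vector and covariance matrix) to joint-angle vectors from motion capture and scoring a candidate pose by its Mahalanobis distance, which could serve as the soft-constraint term added to a Jacobian-based IK objective. The Gaussian-process synthesis of target-dependent MGDMs described in the abstract is omitted, and the function and variable names are illustrative assumptions.

    import numpy as np

    def fit_mgdm(joint_angles):
        # joint_angles: (n_frames, n_dof) stacked joint-angle vectors from motion capture.
        mean = joint_angles.mean(axis=0)
        cov = np.cov(joint_angles, rowvar=False)
        cov += 1e-6 * np.eye(cov.shape[0])       # regularize so the covariance is invertible
        return mean, np.linalg.inv(cov)

    def pose_prior_cost(theta, mean, cov_inv):
        # Mahalanobis distance of a candidate pose theta; small values indicate poses
        # close to the motion-capture distribution. In a damped-least-squares IK step
        # this term would be weighted and added to the end-effector position error.
        d = theta - mean
        return float(d @ cov_inv @ d)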