36-Issue 2
Browsing 36-Issue 2 by Subject "Animation"
Item Character-Object Interaction Retrieval Using the Interaction Bisector Surface (The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhao, Xi; Choi, Myung Geol; Komura, Taku; Eds.: Loic Barthe and Bedrich Benes
In this paper, we propose a novel approach for the classification and retrieval of interactions between human characters and objects. We propose to use the interaction bisector surface (IBS) between the body and the object as a feature of the interaction. We define a multi-resolution representation of the body structure, and compute a correspondence matrix hierarchy that describes which parts of the character's skeleton take part in the composition of the IBS and how much they contribute to the interaction. Key-frames of the interactions are extracted based on the evolution of the IBS and used to align the query interaction with the interactions in the database. Our experimental results show that our approach outperforms existing techniques in motion classification and retrieval, which implies that contextual information plays a significant role in scene and interaction description. Our method also outperforms techniques that use features based on the spatial relations between body parts, or between body parts and the object, and can be applied to character motion synthesis and robot motion planning.

Item Geometric Stiffness for Real-time Constrained Multibody Dynamics (The Eurographics Association and John Wiley & Sons Ltd., 2017) Andrews, Sheldon; Teichmann, Marek; Kry, Paul G.; Eds.: Loic Barthe and Bedrich Benes
This paper focuses on the stable and efficient simulation of articulated rigid body systems for real-time applications. Specifically, we focus on the use of geometric stiffness, which can dramatically increase simulation stability. We examine several numerical problems with the inclusion of geometric stiffness in the equations of motion, as proposed by previous work, and address these issues by introducing a novel method for efficiently building the linear system, which improves tractability and numerical efficiency. Furthermore, geometric stiffness tends to significantly dissipate kinetic energy. We propose an adaptive damping scheme, inspired by geometric stiffness, that uses a stability criterion based on the numerical integrator to determine the amount of non-constitutive damping required to stabilize the simulation. With this approach, not only is the dynamical behavior better preserved, but the simulation remains stable for mass ratios of 1,000,000-to-1 at time steps up to 0.1 s. We present a number of challenging scenarios to demonstrate that our method improves efficiency and increases stability by orders of magnitude compared to previous work.

Item Gradient-based Steering for Vision-based Crowd Simulation Algorithms (The Eurographics Association and John Wiley & Sons Ltd., 2017) Dutra, Teofilo B.; Marques, Ricardo; Cavalcante-Neto, Joaquim Bento; Vidal, Creto A.; Pettré, Julien; Eds.: Loic Barthe and Bedrich Benes
Most recent crowd simulation algorithms equip agents with a synthetic vision component for steering. They offer promising perspectives through a more realistic simulation of the way humans navigate according to their perception of the surrounding environment. In this paper, we propose a new perception/motion loop for steering agents along collision-free trajectories that significantly improves the quality of vision-based crowd simulators. In contrast with solutions where agents avoid collisions in a purely reactive (binary) way, we explore the full range of possible adaptations and retain the locally optimal one. To this end, we introduce a cost function, based on perceptual variables, which estimates an agent's situation considering both the risk of future collision and the desired destination. We then compute the partial derivatives of that function with respect to all possible motion adaptations, and the agent adapts its motion by following the gradient. This paper thus makes two main contributions: a general-purpose control scheme for steering synthetic vision-based agents, and cost functions for evaluating the perceived danger of the current situation. We demonstrate improvements in several cases.
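To make the control scheme concrete, here is a minimal sketch of one gradient-based steering step. The cost function is a toy stand-in for the paper's perceptual cost (its risk term uses time-to-closest-approach as a crude proxy for future collision), and the finite-difference gradient, step size, and 2D setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def steering_cost(velocity, agent_pos, goal, obstacles):
    """Toy cost: goal attraction plus a collision-risk penalty.
    Stands in for the paper's perceptual cost; all terms are placeholders."""
    to_goal = goal - agent_pos
    goal_term = -np.dot(velocity, to_goal / (np.linalg.norm(to_goal) + 1e-9))
    risk_term = 0.0
    for obs_pos, obs_vel in obstacles:
        rel_pos = obs_pos - agent_pos
        rel_vel = obs_vel - velocity
        # Time to closest approach: a crude proxy for the risk of future collision.
        ttca = max(0.0, -np.dot(rel_pos, rel_vel) / (np.dot(rel_vel, rel_vel) + 1e-9))
        closest = rel_pos + ttca * rel_vel
        risk_term += np.exp(-np.dot(closest, closest))
    return goal_term + risk_term

def steer(velocity, agent_pos, goal, obstacles, step=0.1, eps=1e-4):
    """One perception/motion iteration: numerical gradient, then descend."""
    grad = np.zeros(2)
    for i in range(2):
        dv = np.zeros(2)
        dv[i] = eps
        grad[i] = (steering_cost(velocity + dv, agent_pos, goal, obstacles)
                   - steering_cost(velocity - dv, agent_pos, goal, obstacles)) / (2 * eps)
    return velocity - step * grad  # adapt motion along the negative gradient

# One step for an agent heading toward a goal past a static obstacle:
v = steer(np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([10.0, 0.0]),
          [(np.array([5.0, 0.2]), np.array([0.0, 0.0]))])
```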
Item Interactive Paper Tearing (The Eurographics Association and John Wiley & Sons Ltd., 2017) Schreck, Camille; Rohmer, Damien; Hahmann, Stefanie; Eds.: Loic Barthe and Bedrich Benes
We propose an efficient method to model paper tearing in the context of interactive modeling. The method uses geometric information to automatically detect potential starting points of tears. We further introduce a new hybrid geometric and physics-based method to compute the trajectory of tears while procedurally synthesizing high-resolution details of the tearing path using a texture-based approach. The results are compared with real paper and with previous studies on the expected geometric paths of tearing paper.

Item Makeup Lamps: Live Augmentation of Human Faces via Projection (The Eurographics Association and John Wiley & Sons Ltd., 2017) Bermano, Amit Haim; Billeter, Markus; Iwai, Daisuke; Grundhöfer, Anselm; Eds.: Loic Barthe and Bedrich Benes
We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but by the time it is projected, the face has moved to a different configuration. Our system therefore aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed, and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, ours is the first system that fully supports dynamic facial projection mapping without requiring any physical tracking markers, and that incorporates facial expressions.
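The prediction step that compensates for projector latency can be illustrated with a generic constant-velocity Kalman filter that extrapolates a tracked signal by a fixed lookahead. This is only a sketch of the idea: the paper uses adaptive Kalman filtering over facial motion and deformation, whereas the state model, noise levels, and scalar signal here are assumptions chosen for brevity.

```python
import numpy as np

class LatencyCompensator:
    """Constant-velocity Kalman filter that predicts a scalar signal
    (e.g., one blendshape weight) `lookahead` seconds ahead, so the
    projected image matches the face at display time. Illustrative only."""

    def __init__(self, dt, lookahead, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # we observe the value only
        self.Q = q * np.eye(2)                       # process noise (assumed)
        self.R = np.array([[r]])                     # measurement noise (assumed)
        self.x = np.zeros(2)                         # state: [value, velocity]
        self.P = np.eye(2)
        self.lookahead = lookahead

    def update(self, z):
        # Predict to the current frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the new measurement.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # Extrapolate by the end-to-end latency.
        return self.x[0] + self.lookahead * self.x[1]

# 60 fps capture with an assumed 10 ms end-to-end latency:
kf = LatencyCompensator(dt=1 / 60, lookahead=0.010)
predicted = kf.update(0.42)   # feed each new measurement, project the prediction
```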
Item Multi-View Stereo on Consistent Face Topology (The Eurographics Association and John Wiley & Sons Ltd., 2017) Fyffe, Graham; Nagano, Koki; Huynh, Loc; Saito, Shunsuke; Busch, Jay; Jones, Andrew; Li, Hao; Debevec, Paul; Eds.: Loic Barthe and Bedrich Benes
We present a multi-view stereo reconstruction technique that directly produces a complete high-fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist-quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach deforms a common template model to match multi-view input images of the subject while satisfying cross-view, cross-subject, and cross-pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA-based dimension reduction and denoising scheme. We demonstrate high-fidelity performance capture results with challenging head motion and complex facial expressions around the eye and mouth regions. While the quality of our results is on par with the current state of the art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production-quality mesh topologies.

Item Performance-Based Biped Control using a Consumer Depth Camera (The Eurographics Association and John Wiley & Sons Ltd., 2017) Lee, Yoonsang; Kwon, Taesoo; Eds.: Loic Barthe and Bedrich Benes
We present a technique for controlling physically simulated characters using input from an off-the-shelf depth camera. Our controller takes a real-time stream of user poses as input and simulates a stream of target poses for a biped based on it. The simulated biped mimics the user's actions while moving forward at a modest speed and maintaining balance. The controller is parameterized over a set of modulated reference motions that aims to cover the range of possible user actions. For real-time simulation, the best set of control parameters for the current input pose is chosen from the parameterized sets of pre-computed control parameters via a regression method. By applying the chosen parameters at each moment, the simulated biped can imitate a range of user actions while walking in various interactive scenarios.

Item Real-Time Multi-View Facial Capture with Synthetic Training (The Eurographics Association and John Wiley & Sons Ltd., 2017) Klaudiny, Martin; McDonagh, Steven; Bradley, Derek; Beeler, Thabo; Mitchell, Kenny; Eds.: Loic Barthe and Bedrich Benes
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method achieves high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. Training is tailored to the specific actor's appearance, and we further condition it on the expected illumination and the physical capture rig by generating the training data synthetically. To leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that regressing on multiple video streams achieves higher quality than previous approaches designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel configuration that allows cameras to be mounted outside the actor's field of view, which is very beneficial: the cameras are then less of a distraction for the actor and leave an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular with the ever-growing demand for motion-captured facial animation in visual effects and video games.
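Random ferns, the regression primitive named above, are simple enough to sketch. The following is a generic single-view fern regressor over pixel-difference features, not the paper's multi-dimensional multi-view formulation; the feature tests, depth, and training scheme are illustrative assumptions.

```python
import numpy as np

class FernRegressor:
    """A single random fern: D binary pixel-difference tests index one of
    2^D bins, each storing the mean target of the training samples that
    fall into it. Feature choice and depth are illustrative assumptions."""

    def __init__(self, depth=5, rng=None):
        self.depth = depth
        self.rng = rng or np.random.default_rng(0)

    def fit(self, features, targets):
        n, d = features.shape
        # Each binary test compares two randomly chosen feature coordinates.
        self.pairs = self.rng.integers(0, d, size=(self.depth, 2))
        bins = self._bin_index(features)
        self.table = np.zeros((2 ** self.depth, targets.shape[1]))
        for b in range(2 ** self.depth):
            mask = bins == b
            if mask.any():
                self.table[b] = targets[mask].mean(axis=0)
        return self

    def _bin_index(self, features):
        idx = np.zeros(len(features), dtype=int)
        for bit, (i, j) in enumerate(self.pairs):
            idx |= (features[:, i] > features[:, j]).astype(int) << bit
        return idx

    def predict(self, features):
        return self.table[self._bin_index(features)]

# Synthetic usage: 32 sampled intensities regressed to a 1-D target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
Y = X[:, :2] @ np.array([[1.0], [0.5]])
pred = FernRegressor(depth=6).fit(X, Y).predict(X[:5])
```

In practice many such ferns are averaged or cascaded; a single fern is shown here only to expose the bin-and-lookup mechanism.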
Item Simulation-Ready Hair Capture (The Eurographics Association and John Wiley & Sons Ltd., 2017) Hu, Liwen; Bradley, Derek; Li, Hao; Beeler, Thabo; Eds.: Loic Barthe and Bedrich Benes
Physical simulation has long been the approach of choice for generating realistic hair animations in CG. A persistent drawback of simulation, however, is the need to manually set the physical parameters of the simulation model in order to get the desired dynamic behavior. To alleviate this, researchers have begun to explore methods for reconstructing hair from the real world and even estimating the corresponding simulation parameters through inversion. So far, however, these methods have had limited applicability: dynamically captured hair can only be played back, not edited, and simulation parameters can only be solved for static hairstyles, ignoring the dynamic behavior. We present the first method for capturing dynamic hair and automatically determining the physical properties for simulating the observed hairstyle in motion. Since our dynamic inversion is agnostic to the simulation model, the proposed method applies to virtually any hair simulation technique, which we demonstrate using two state-of-the-art hair simulation models. The output of our method is a fully simulation-ready hairstyle, consisting of both the static hair geometry and its physical properties. The hairstyle can be easily edited by adding external forces, changing the head motion, or re-simulating in completely different environments, all while remaining faithful to the captured hairstyle.

Item Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs (The Eurographics Association and John Wiley & Sons Ltd., 2017) Marcard, Timo von; Rosenhahn, Bodo; Black, Michael J.; Pons-Moll, Gerard; Eds.: Loic Barthe and Bedrich Benes
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker, Sparse Inertial Poser (SIP), enables motion capture using only six sensors (attached to the wrists, lower legs, back, and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
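The multi-frame joint optimization at the core of SIP can be gestured at with a generic least-squares fit over a window of frames, where accelerations come from finite differences of predicted sensor positions. The toy forward-kinematics callbacks, weighting, and lack of pose priors below are assumptions made for brevity; the paper fits a statistical body model (SMPL) with anthropometric constraints.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_window(theta0, meas_ori, meas_acc, fk_ori, fk_pos, dt, w_acc=0.1):
    """Fit pose parameters theta[t] over T frames to orientation and
    acceleration measurements from S sensors.

    fk_ori(theta) -> (S, 3) sensor orientations (axis-angle here)
    fk_pos(theta) -> (S, 3) sensor positions
    Both are stand-ins for the body model's forward kinematics (assumed)."""
    T, P = theta0.shape

    def residuals(flat):
        theta = flat.reshape(T, P)
        ori = np.stack([fk_ori(theta[t]) for t in range(T)])   # (T, S, 3)
        pos = np.stack([fk_pos(theta[t]) for t in range(T)])   # (T, S, 3)
        # Acceleration via central finite differences over the window.
        acc = (pos[2:] - 2 * pos[1:-1] + pos[:-2]) / dt ** 2    # (T-2, S, 3)
        r_ori = (ori - meas_ori).ravel()
        r_acc = w_acc * (acc - meas_acc).ravel()
        return np.concatenate([r_ori, r_acc])

    sol = least_squares(residuals, theta0.ravel())
    return sol.x.reshape(T, P)

# Toy usage: one "sensor" whose orientation/position equal the pose directly.
T, P = 5, 3
fk = lambda th: th.reshape(1, 3)
theta = fit_window(np.zeros((T, P)),
                   meas_ori=np.zeros((T, 1, 3)),
                   meas_acc=np.zeros((T - 2, 1, 3)),
                   fk_ori=fk, fk_pos=fk, dt=1 / 60)
```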
Item Sparse Rig Parameter Optimization for Character Animation (The Eurographics Association and John Wiley & Sons Ltd., 2017) Song, Jaewon; Ribera, Roger Blanco i; Cho, Kyungmin; You, Mi; Lewis, J. P.; Choi, Byungkuk; Noh, Junyong; Eds.: Loic Barthe and Bedrich Benes
We propose a novel motion retargeting method that efficiently estimates artist-friendly rig-space parameters. Inspired by the workflow typically observed in keyframe animation, our approach transfers a source motion onto a production-friendly character rig by optimizing the rig-space parameters while balancing fidelity to the source motion against the ease of subsequent editing. We use an intermediate object to transfer both the skeletal motion and the mesh deformation. The target rig-space parameters are then optimized to minimize the error between the motion of the intermediate object and the target character. The optimization uses a set of artist-defined weights to modulate the effect of the different rig-space parameters over time. Sparsity-inducing regularizers and keyframe extraction streamline any additional editing. The results obtained with different types of character rigs demonstrate the versatility of our method and its effectiveness in simplifying manual editing within the production pipeline.
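Sparsity-inducing regularization of rig parameters can be illustrated with a small L1-regularized least-squares problem solved by proximal gradient descent (ISTA). The linearized rig Jacobian and the solver choice below are simplifications for the sake of a runnable example, not the paper's actual formulation.

```python
import numpy as np

def fit_rig_sparse(J, target_delta, lam=0.05, step=None, iters=500):
    """Solve min_p 0.5*||J p - target_delta||^2 + lam*||p||_1 via ISTA.
    J is a linearized rig Jacobian mapping rig parameters to vertex
    displacements; both J and target_delta are illustrative stand-ins."""
    if step is None:
        # 1 / Lipschitz constant of the smooth term's gradient.
        step = 1.0 / np.linalg.norm(J, 2) ** 2
    p = np.zeros(J.shape[1])
    for _ in range(iters):
        grad = J.T @ (J @ p - target_delta)
        p = p - step * grad
        # Soft-thresholding: drives small parameters exactly to zero,
        # keeping the solution sparse and hence easy to edit by hand.
        p = np.sign(p) * np.maximum(np.abs(p) - step * lam, 0.0)
    return p

# Example: 300 vertex coordinates driven by 40 rig parameters (synthetic).
rng = np.random.default_rng(1)
J = rng.normal(size=(300, 40))
true_p = np.zeros(40)
true_p[[3, 17]] = [0.8, -0.5]            # only two controls active
p_hat = fit_rig_sparse(J, J @ true_p)
print(np.nonzero(np.abs(p_hat) > 1e-3)[0])  # expect a sparse support
```

The sparse solution is what makes the retargeted result "artist-friendly": most rig controls stay untouched, so an animator can refine only the handful of parameters that actually carry the motion.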