SCA 05: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA 05: Eurographics/SIGGRAPH Symposium on Computer Animation by Subject "Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation"
Now showing 1 - 8 of 8
Item: AER: Aesthetic Exploration and Refinement for Expressive Character Animation (The Eurographics Association, 2005)
Authors: Neff, Michael; Fiume, Eugene
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
Our progress in the problem of making animated characters move expressively has been slow, and it persists in being among the most challenging in computer graphics. Simply attending to the low-level motion control problem, particularly for physically based models, is very difficult. Providing an animator with the tools to imbue character motion with broad expressive qualities is even more ambitious, but it is clearly a goal to which we must aspire. Part of the problem is simply finding the right language in which to express qualities of motion. Another important issue is that expressive animation often involves many disparate parts of the body, which thwarts bottom-up controller synthesis. We demonstrate progress in this direction through the specification of directed, expressive animation over a limited range of standing movements. A key contribution is that through the use of high-level concepts such as character sketches, actions, and properties, which impose different modalities of character behaviour, we are able to create many different animated interpretations of the same script. These tools support both rapid exploration of the aesthetic space and detailed refinement. Basic character actions and properties are distilled from an extensive search of the performing arts literature. We demonstrate how all high-level constructions for expressive animation can be given a precise semantics that translates into a low-level motion specification, which is then simulated either physically or kinematically.
Our language and system can act as a bridge between artistic and technical communities to resolve ambiguities regarding the language of motion. We demonstrate our results through an implementation and various examples.

Item: Animosaics (The Eurographics Association, 2005)
Authors: Smith, Kaleigh; Liu, Yunjun; Klein, Allison
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
Animated mosaics are a traditional form of stop-motion animation created by arranging and rearranging small objects or tiles from frame to frame. While this animation style is uniquely compelling, the traditional process of manually placing and then moving tiles in each frame is time-consuming and laborious. Recent work has proposed algorithms for static mosaics, but generating temporally coherent mosaic animations has remained an open problem. In addition, previous techniques for temporal coherence allow non-photorealistic primitives to layer, blend, deform, or scale, techniques that are unsuitable for mosaic animations. This paper presents a new approach to temporal coherence and applies it to build a method for creating mosaic animations. Specifically, we characterize temporal coherence as the coordinated movement of groups of primitives. We describe a system for achieving this coordinated movement to create temporally coherent geometric packings of 2D shapes over time. We also show how to create static mosaics composed of different tile shapes using area-based centroidal Voronoi diagrams.

Item: An Efficient Search Algorithm for Motion Data Using Weighted PCA (The Eurographics Association, 2005)
Authors: Forbes, K.; Fiume, E.
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
Good motion data is costly to create. Such an expense often makes the reuse of motion data through transformation and retargeting a more attractive option than creating new motion from scratch. Reuse requires the ability to search a growing corpus of motion data automatically and efficiently, which remains a difficult open problem.
We present a method for quickly searching long, unsegmented motion clips for subregions that most closely match a short query clip. Our search algorithm is based on a weighted-PCA pose representation that allows for flexible and efficient pose-to-pose distance calculations. We present our pose representation and the details of the search algorithm. We evaluate the performance of a prototype search application using both synthetic and captured motion data. Using these results, we propose ways to improve the application's performance. The results inform a discussion of the algorithm's good scalability characteristics.

Item: Fast and accurate goal-directed motion synthesis for crowds (The Eurographics Association, 2005)
Authors: Sung, Mankyu; Kovar, Lucas; Gleicher, Michael
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
This paper presents a highly efficient motion synthesis algorithm that is well suited to animating large numbers of characters. Given constraints that require characters to be in specific poses, positions, and orientations within specified time intervals, our algorithm synthesizes motions that exactly satisfy these constraints while avoiding inter-character collisions and collisions with the environment. We represent the space of possible actions with a motion graph and use search algorithms to generate motion. To provide a good initial guess for the search, we employ a fast path planner based on probabilistic roadmaps to navigate characters through complex environments. Also, unlike existing algorithms, our search process allows for smooth, continual adjustments to position, orientation, and timing. This allows us both to satisfy constraints precisely and to generate motion much faster than would otherwise be possible.

Item: Motion Modeling for On-Line Locomotion Synthesis (The Eurographics Association, 2005)
Authors: Kwon, Taesoo; Shin, Sung Yong
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
In this paper, we propose an example-based approach to on-line locomotion synthesis. Our approach consists of two parts: motion analysis and motion synthesis. In the motion analysis part, an unlabeled motion sequence is first decomposed into motion segments, exploiting the behavior of the COM (center of mass) trajectory of the performer. These motion segments are then classified into groups such that the motion segments in each group share an identical footstep pattern. Finally, we construct a hierarchical motion transition graph by representing these groups and their connectivity to other groups as nodes and edges, respectively. The coarse level of this graph models locomotive motions and their transitions, and the fine level mainly captures the cyclic nature of locomotive motions. In the motion synthesis part, given a stream of motion specifications in an on-line manner, the motion transition graph is traversed while blending the motion segments at each node, one by one, guided by the motion specifications. Our main contributions are the motion labeling scheme and a new motion model, embodied by the hierarchical motion transition graph, which together enable not only artifact-free motion blending but also seamless motion transition.

Item: Simple and efficient compression of animation sequences (The Eurographics Association, 2005)
Authors: Sattler, Mirko; Sarlette, Ralf; Klein, Reinhard
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
We present a new geometry compression method for animations based on clustered principal component analysis (CPCA). Instead of analyzing the set of vertices in each frame, our method analyzes the set of paths traced by all vertices over a certain animation length. Using this data-driven approach, it can identify mesh parts that are "coherent" over time. This usually leads to a very efficient and robust segmentation of the mesh into meaningful clusters, e.g.
the wings of a chicken. These parts are then compressed separately using standard principal component analysis (PCA). Each of these clusters can be compressed more efficiently, with fewer PCA components, than in previous approaches. Results show that the new method outperforms other compression schemes, such as pure PCA-based compression or combinations with linear prediction coding, while maintaining a lower reconstruction error. This holds even if the components and weights are quantized before transmission. The reconstruction process is very simple and can be performed directly on the GPU.

Item: Transferable Videorealistic Speech Animation (The Eurographics Association, 2005)
Authors: Chang, Yao-Jen; Ezzat, Tony
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
Image-based videorealistic speech animation achieves significant visual realism at the cost of collecting a large 5- to 10-minute video corpus of the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus for a specific person under a controlled recording setup may not be easily obtained. In this paper, we propose a model transfer and adaptation algorithm that allows a novel person to be animated using only a small video corpus. The algorithm starts with a multidimensional morphable model (MMM) previously trained on a different speaker with a large corpus and transfers it to the novel speaker with a much smaller corpus. The algorithm consists of 1) a novel matching-by-synthesis algorithm that semi-automatically selects new MMM prototype images from the new video corpus, and 2) a novel gradient-descent linear regression algorithm that adapts the MMM phoneme models to the data in the novel video corpus.
Encouraging experimental results are presented in which a morphable model trained on a performer with a 10-minute corpus is transferred to a novel person using a 15-second movie clip as the adaptation video corpus.

Item: Video-Based Character Animation (The Eurographics Association, 2005)
Authors: Starck, J.; Miller, G.; Hilton, A.
Editors: D. Terzopoulos, V. Zordan, K. Anjyo, P. Faloutsos
In this paper we introduce a video-based representation for free-viewpoint visualization and motion control of 3D character models created from multiple-view video sequences of real people. Previous approaches to video-based rendering provide no control of scene dynamics to manipulate, retarget, and create new 3D content from captured scenes. Here we contribute a new approach, combining image-based reconstruction and video-based animation to allow controlled animation of people from captured multiple-view video sequences. We represent a character as a motion graph of free-viewpoint video motions for animation control. We introduce the use of geometry videos to represent reconstructed scenes of people for free-viewpoint video rendering. We describe a novel spherical matching algorithm to derive global surface-to-surface correspondence in spherical geometry images for motion blending and the construction of seamless transitions between motion sequences. Finally, we demonstrate interactive video-based character animation with real-time rendering and free-viewpoint visualization. This approach synthesizes highly realistic character animations with dynamic surface shape and appearance captured from multiple-view video of people.
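Several of the abstracts above rest on standard geometric machinery. The area-based centroidal Voronoi diagrams mentioned in the Animosaics abstract can be approximated with plain Lloyd relaxation; the sketch below is illustrative only, uses Monte Carlo area estimates rather than the paper's actual tile-packing method, and the function name `lloyd_cvt` is hypothetical.

```python
import numpy as np

def lloyd_cvt(sites, iters=15, n_samples=20000, seed=0):
    """Lloyd relaxation toward a centroidal Voronoi diagram of point
    sites in the unit square, using Monte Carlo area estimates."""
    rng = np.random.default_rng(seed)
    sites = sites.copy()
    for _ in range(iters):
        samples = rng.random((n_samples, 2))  # uniform area samples
        # assign each sample to its nearest site (i.e. its Voronoi cell)
        d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
        owner = np.argmin(d2, axis=1)
        for i in range(len(sites)):
            cell = samples[owner == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)  # move site to cell centroid
    return sites
```

After a few iterations even tightly bunched sites spread into evenly sized cells, which is the property a mosaic packing needs.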
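The weighted-PCA pose search described by Forbes and Fiume can be illustrated with a minimal sketch: weight the degrees of freedom, project all poses into a truncated PCA basis, and compare a query pose by Euclidean distance in that low-dimensional space. This is a toy version under stated assumptions (joint angles flattened into pose vectors, a user-supplied per-DOF weight vector), not the paper's implementation; `build_weighted_pca` and `query_nearest` are hypothetical names.

```python
import numpy as np

def build_weighted_pca(poses, weights, k=8):
    # poses: (n_frames, n_dofs) pose vectors; weights: per-DOF importance
    X = poses * weights                 # emphasize important joints before PCA
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                      # top-k principal directions
    coords = Xc @ basis.T               # low-dimensional pose coordinates
    return mean, basis, coords

def query_nearest(coords, mean, basis, weights, query_pose):
    # project the query the same way, then do a cheap nearest-neighbour scan
    q = (query_pose * weights - mean) @ basis.T
    d = np.linalg.norm(coords - q, axis=1)
    return int(np.argmin(d)), float(d.min())
```

The point of the projection is that pose-to-pose distances become k-dimensional dot products instead of full joint-space comparisons, which is what makes scanning long unsegmented clips cheap.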
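The clustered-PCA compression idea in Sattler et al.'s abstract (analyze per-vertex paths over time, cluster them into coherent mesh parts, and compress each part with its own small PCA basis) can be sketched as follows. The naive k-means step here merely stands in for the paper's CPCA segmentation, and all names and parameters are hypothetical.

```python
import numpy as np

def compress_animation(V, n_clusters=2, k=4, seed=0):
    # V: (n_frames, n_verts, 3) vertex positions over the animation
    F, N, _ = V.shape
    paths = V.transpose(1, 0, 2).reshape(N, F * 3)   # one trajectory per vertex
    # crude k-means on trajectories, standing in for CPCA segmentation
    rng = np.random.default_rng(seed)
    centers = paths[rng.choice(N, n_clusters, replace=False)]
    for _ in range(10):
        labels = np.argmin(((paths[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = paths[labels == c].mean(axis=0)
    # per-cluster PCA: each coherent part gets its own low-rank basis
    parts = []
    for c in range(n_clusters):
        mask = labels == c
        if not mask.any():
            continue
        P = paths[mask]
        mean = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
        basis = Vt[:k]
        parts.append((mask, mean, basis, (P - mean) @ basis.T))
    return parts

def decompress(parts, F, N):
    out = np.empty((N, F * 3))
    for mask, mean, basis, coeff in parts:
        out[mask] = mean + coeff @ basis             # low-rank reconstruction
    return out.reshape(N, F, 3).transpose(1, 0, 2)
```

When a mesh part moves nearly rigidly, its trajectory matrix is close to low rank, so a handful of PCA coefficients per vertex reproduces it almost exactly, which is why segmenting before PCA beats one global basis.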