SCA 13: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA 13: Eurographics/SIGGRAPH Symposium on Computer Animation by Subject "animation"
Now showing 1 - 2 of 2
Item
One-to-Many: Example-Based Mesh Animation Synthesis (ACM SIGGRAPH / Eurographics Association, 2013) Zheng, Changxi; Theodore Kim and Robert Sumner
We propose an example-based approach for synthesizing diverse mesh animations. Given a short clip of deformable mesh animation, our method synthesizes a large number of different animations of arbitrary length. Combining an automatically inferred linear blend skinning (LBS) model with a PCA-based model reduction, our method identifies possible smooth transitions in the example sequence. To create smooth transitions, we synthesize reduced deformation parameters based on a set of characteristic key vertices on the mesh. Furthermore, by analyzing cut nodes on a graph built upon the LBS model, we decompose the mesh into independent components, whose motions are synthesized individually and then assembled. The complexity of our method is independent of mesh resolution, enabling efficient generation of arbitrarily long animations without tedious parameter tuning or heavy computation. We evaluate our method on various animation examples and demonstrate that numerous diverse animations can be generated from a single example.

Item
Virtual Character Performance From Speech (ACM SIGGRAPH / Eurographics Association, 2013) Marsella, Stacy; Xu, Yuyu; Lhommet, Margaux; Feng, Andrew; Scherer, Stefan; Shapiro, Ari; Theodore Kim and Robert Sumner
We demonstrate a method for generating a 3D virtual character performance from an audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we analyze stress and pitch, relate them to the spoken words, and identify the agitation state. Our rule-based system performs a shallow analysis of the utterance text to determine its semantic, pragmatic, and rhetorical content. Based on these analyses, the system generates facial expressions and behaviors including head movements, eye saccades, gestures, blinks, and gazes. Our technique synthesizes the performance and generates novel gesture animations based on coarticulation with other closely scheduled animations. Because our method utilizes semantics in addition to prosody, we generate virtual character performances that are more appropriate than those of methods that use prosody alone; a study shows that our technique outperforms prosody-only methods.
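The first abstract builds on an inferred linear blend skinning (LBS) model. As a minimal illustrative sketch only (function and variable names are ours, not from the paper), standard LBS deforms each vertex by blending per-bone rigid transforms with per-vertex skinning weights:

```python
import numpy as np

def lbs(rest_verts, bone_transforms, weights):
    """Linear blend skinning: blend per-bone affine transforms per vertex.

    rest_verts:      (V, 3) rest-pose vertex positions
    bone_transforms: (B, 3, 4) per-bone affine transforms [R | t]
    weights:         (V, B) skinning weights; each row sums to 1
    Returns (V, 3) deformed vertex positions.
    """
    V = rest_verts.shape[0]
    # Homogeneous coordinates so translation is applied by the 3x4 transform.
    homo = np.hstack([rest_verts, np.ones((V, 1))])          # (V, 4)
    # Each vertex transformed by every bone: (B, V, 3).
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Weighted blend of the per-bone results: (V, 3).
    return np.einsum('vb,bvi->vi', weights, per_bone)
```

With identity bone transforms this reproduces the rest pose; translating one bone moves each vertex in proportion to its weight on that bone, which is the smooth-blending behavior the paper's inferred LBS model exploits.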