Dynamic, Expressive Speech Animation From a Single Mesh
dc.contributor.author | Wampler, Kevin | en_US |
dc.contributor.author | Sasaki, Daichi | en_US |
dc.contributor.author | Zhang, Li | en_US |
dc.contributor.author | Popovic, Zoran | en_US |
dc.contributor.editor | Dimitris Metaxas and Jovan Popovic | en_US |
dc.date.accessioned | 2014-01-29T07:27:27Z | |
dc.date.available | 2014-01-29T07:27:27Z | |
dc.date.issued | 2007 | en_US |
dc.description.abstract | In this work we present a method for human face animation that generates animations for a novel person given just a single mesh of their face. These animations can be of arbitrary text and may include emotional expressions. We build a multilinear model from data that encapsulates the variation in dynamic face motion across changes in identity, expression, and text. We then describe a synthesis method, consisting of a phoneme planning stage and a blending stage, which uses this model as a base and attempts to preserve both face shape and dynamics given a novel text and an emotion at each point in time. | en_US |
dc.description.seriesinformation | Eurographics/SIGGRAPH Symposium on Computer Animation | en_US |
dc.identifier.isbn | 978-3-905673-44-9 | en_US |
dc.identifier.issn | 1727-5288 | en_US |
dc.identifier.uri | https://doi.org/10.2312/SCA/SCA07/053-062 | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.subject | Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation | en_US |
dc.title | Dynamic, Expressive Speech Animation From a Single Mesh | en_US |
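The abstract above refers to a multilinear model that factors dynamic face motion into identity, expression, and speech modes. The sketch below is a minimal, hypothetical illustration of how such a multilinear (Tucker-style) model can be evaluated with one weight vector per mode; the tensor dimensions, variable names, and random core tensor are assumptions for illustration only and are not taken from the paper or its implementation.

# Minimal sketch (not the authors' implementation) of evaluating a multilinear
# face model that couples identity, expression, and speech (viseme) factors.
# All dimensions and weights below are illustrative assumptions.
import numpy as np

# Core tensor that would be learned from data: couples the three modes to
# per-vertex displacements (3 * n_vertices flattened coordinates).
n_identity, n_expression, n_viseme, n_coords = 10, 4, 16, 3 * 500
core = np.random.randn(n_identity, n_expression, n_viseme, n_coords)

def synthesize_frame(w_id, w_expr, w_vis):
    # Contract the core tensor with one weight vector per mode to get one frame.
    return np.einsum('ievc,i,e,v->c', core, w_id, w_expr, w_vis)

# Example usage: a fixed identity (which, in spirit, would be fit to the novel
# person's single mesh), a blend of expressions, and one active viseme produced
# by some phoneme-planning stage.
w_id = np.random.randn(n_identity)
w_expr = np.array([0.7, 0.3, 0.0, 0.0])      # e.g. mostly neutral, some happy
w_vis = np.zeros(n_viseme); w_vis[3] = 1.0   # current viseme weight
frame = synthesize_frame(w_id, w_expr, w_vis)
print(frame.shape)                           # (1500,) flattened vertex coordinates

In a full pipeline, consecutive frames would be produced by varying the viseme and expression weights over time and blending them, which is the role the abstract assigns to the phoneme planning and blending stages.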