Dynamic, Expressive Speech Animation From a Single Mesh

dc.contributor.author: Wampler, Kevin
dc.contributor.author: Sasaki, Daichi
dc.contributor.author: Zhang, Li
dc.contributor.author: Popovic, Zoran
dc.contributor.editor: Dimitris Metaxas and Jovan Popovic
dc.date.accessioned: 2014-01-29T07:27:27Z
dc.date.available: 2014-01-29T07:27:27Z
dc.date.issued: 2007
dc.description.abstract: In this work we present a method for human face animation that generates animations for a novel person from just a single mesh of their face. These animations can follow arbitrary text and may include emotional expressions. We build a multilinear model from data that encapsulates the variation in dynamic face motion across changes in identity, expression, and text. We then describe a synthesis method, consisting of a phoneme-planning stage and a blending stage, that uses this model as a base and attempts to preserve both face shape and dynamics given a novel text and an emotion at each point in time.
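The multilinear model mentioned in the abstract can be illustrated with a standard tensor contraction: a core tensor learned from data is contracted against one weight vector per attribute (identity, expression, text) to produce a mesh. This is a minimal sketch of the general technique, not the paper's actual implementation; all dimensions and names below are assumptions.

```python
import numpy as np

# Hypothetical dimensions (assumptions, not taken from the paper).
n_verts = 30       # flattened mesh vertex coordinates
n_identity = 4     # identity modes
n_expression = 3   # expression modes
n_text = 5         # speech/text modes

rng = np.random.default_rng(0)
# Core tensor: one axis for mesh coordinates plus one axis per attribute.
core = rng.standard_normal((n_verts, n_identity, n_expression, n_text))

def synthesize(core, w_id, w_expr, w_text):
    """Contract each attribute axis against its weight vector,
    yielding a single flattened mesh."""
    return np.einsum("vijt,i,j,t->v", core, w_id, w_expr, w_text)

w_id = rng.standard_normal(n_identity)
w_expr = rng.standard_normal(n_expression)
w_text = rng.standard_normal(n_text)
mesh = synthesize(core, w_id, w_expr, w_text)
print(mesh.shape)  # (30,)
```

Because the model is linear in each factor separately, fixing the identity weights and varying only the expression or text weights animates the same person, which is the property that lets a single fitted mesh drive novel speech.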
dc.description.seriesinformation: Eurographics/SIGGRAPH Symposium on Computer Animation
dc.identifier.isbn: 978-3-905673-44-9
dc.identifier.issn: 1727-5288
dc.identifier.uri: https://doi.org/10.2312/SCA/SCA07/053-062
dc.publisher: The Eurographics Association
dc.subject: Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation
dc.title: Dynamic, Expressive Speech Animation From a Single Mesh