Unsupervised Learning for Speech Motion Editing
dc.contributor.author | Cao, Yong | en_US |
dc.contributor.author | Faloutsos, Petros | en_US |
dc.contributor.author | Pighin, Frédéric | en_US |
dc.contributor.editor | D. Breen and M. Lin | en_US |
dc.date.accessioned | 2014-01-29T06:32:26Z | |
dc.date.available | 2014-01-29T06:32:26Z | |
dc.date.issued | 2003 | en_US |
dc.description.abstract | We present a new method for editing speech-related facial motions. Our method uses an unsupervised learning technique, Independent Component Analysis (ICA), to extract a set of meaningful parameters without any annotation of the data. With ICA, we are able to solve a blind source separation problem and describe the original data as a linear combination of two sources. One source captures content (speech) and the other captures style (emotion). By manipulating the independent components we can edit the motions in intuitive ways. | en_US |
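The abstract's pipeline (ICA as blind source separation, then editing by manipulating the recovered components) can be sketched on synthetic data. This is a minimal illustration, not the paper's implementation: the two sources, the mixing matrix, and the edit factor are invented stand-ins for the paper's speech "content" and emotion "style" signals, and the ICA step here is a basic deflation FastICA with a tanh nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two independent non-Gaussian sources: toy stand-ins for speech
# content and emotion style (both purely synthetic assumptions).
content = np.sign(rng.standard_normal(n)) * rng.random(n)  # sub-Gaussian
style = rng.laplace(size=n)                                # super-Gaussian
S = np.vstack([content, style])

A = np.array([[1.0, 0.5],
              [0.3, 1.0]])  # hypothetical unknown mixing ("recorded motion")
X = A @ S

# Whiten the observed mixtures.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
whiten = E @ np.diag(d ** -0.5) @ E.T
Z = whiten @ Xc

# FastICA, deflation scheme, tanh nonlinearity.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(500):
        g = np.tanh(w @ Z)  # shape (n,)
        w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
        for j in range(i):  # decorrelate from components already found
            w_new -= (w_new @ W[j]) * W[j]
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-10
        w = w_new
        if converged:
            break
    W[i] = w

S_est = W @ Z  # recovered independent components

# "Editing": exaggerate one independent component, then remix the signal.
edit = np.diag([1.0, 2.0])  # e.g. double the strength of one component
X_edited = np.linalg.pinv(W @ whiten) @ (edit @ S_est)

# ICA recovers sources only up to sign and permutation, so compare
# each estimate against each true source by absolute correlation.
corr = np.corrcoef(np.vstack([S, S_est]))[:2, 2:]
print(np.round(np.abs(corr), 2))
```

Each recovered component should correlate strongly with exactly one of the true sources, which is what makes the per-component edit meaningful: scaling one component changes "style" while leaving "content" largely intact.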
dc.description.seriesinformation | Symposium on Computer Animation | en_US |
dc.identifier.isbn | 1-58113-659-5 | en_US |
dc.identifier.issn | 1727-5288 | en_US |
dc.identifier.uri | https://doi.org/10.2312/SCA03/225-231 | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Unsupervised Learning for Speech Motion Editing | en_US |