Machine Learning For Plausible Gesture Generation From Speech For Virtual Humans
dc.contributor.author | Ferstl, Ylva | |
dc.date.accessioned | 2022-01-03T07:01:45Z | |
dc.date.available | 2022-01-03T07:01:45Z | |
dc.date.issued | 2021-08-03 | |
dc.description.abstract | The growing use of virtual humans in applications such as games, human-computer interfaces, and virtual reality demands the design of appealing and engaging characters while minimizing the cost and time of creation. Nonverbal behavior is an integral part of human communication and important for believable embodied virtual agents. Co-speech gesture is a key aspect of nonverbal communication, and virtual agents are more engaging when they exhibit gesture behavior. Hand-animation of gesture is costly and does not scale to applications where agents may produce new utterances after deployment. Automated gesture generation is therefore attractive, enabling any new utterance to be animated on the fly. A major body of research has been dedicated to methods of automatic gesture generation, but generating expressive and well-defined gesture motion has commonly relied on the explicit formulation of if-then rules or on probabilistic modelling of annotated features. Machine learning approaches, which can work on unlabelled data, are catching up; however, they often still produce averaged motion that fails to capture the speech-gesture relationship adequately. The results from machine-learned models point to the high complexity of the speech-to-motion learning task. In this work, we explore a number of machine learning methods for improving the speech-to-motion learning outcome, including transfer learning from speech and motion models, adversarial training, and the modelling of explicit expressive gesture parameters from speech. We develop a method for automatically segmenting individual gestures from a motion stream, enabling detailed analysis of the speech-gesture relationship. We present two large multimodal datasets of conversational speech and motion, designed specifically for this modelling problem. Finally, we present and evaluate a novel speech-to-gesture system that merges machine learning and database sampling. | en_US |
dc.description.sponsorship | Science Foundation Ireland (SFI) | en_US |
dc.identifier.citation | Ferstl, Ylva, Machine Learning For Plausible Gesture Generation From Speech For Virtual Humans, Trinity College Dublin, School of Computer Science & Statistics, 2021 | en_US |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/2633145 | |
dc.language.iso | en | en_US |
dc.publisher | Trinity College Dublin, The University of Dublin | en_US |
dc.subject | gesture generation | en_US |
dc.subject | computer animation | en_US |
dc.subject | motion modelling | en_US |
dc.subject | machine learning | en_US |
dc.subject | conversational agents | en_US |
dc.subject | co-speech gesture | en_US |
dc.title | Machine Learning For Plausible Gesture Generation From Speech For Virtual Humans | en_US |
dc.type | Animation | en_US |
dc.type | Thesis | en_US |
Files
Original bundle
- Name: Thesis_twosided.pdf
- Size: 10.42 MB
- Format: Adobe Portable Document Format
- Description: full thesis, two-sided
License bundle
- Name: license.txt
- Size: 1.79 KB
- Description: Item-specific license agreed to upon submission