ZeroEGGS: Zero‐shot Example‐based Gesture Generation from Speech

Date
2023
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd.
Abstract
We present ZeroEGGS, a neural network framework for speech-driven gesture generation with zero-shot style control by example. Style can be controlled with only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or by blending and scaling style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs for a given input, reflecting the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state-of-the-art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high-quality dataset of full-body gesture motion, including fingers, with speech, spanning 19 different styles. Our code and data are publicly available.
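To make the style-control idea concrete, the following is a minimal sketch, not the authors' released implementation, of how a learned style embedding might be blended and scaled before conditioning a speech-driven gesture decoder. The names style_encoder and gesture_decoder, as well as all shapes, are hypothetical stand-ins introduced here for illustration only.

# Minimal sketch (assumed names and shapes, not the ZeroEGGS codebase):
# because style is a vector in a learned latent space, new styles can be
# produced by linearly combining and scaling style embeddings.

import torch

def blend_styles(z_a: torch.Tensor, z_b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Linearly interpolate between two style embeddings."""
    return (1.0 - alpha) * z_a + alpha * z_b

def scale_style(z: torch.Tensor, gain: float) -> torch.Tensor:
    """Exaggerate (gain > 1) or attenuate (gain < 1) a style by scaling its embedding."""
    return gain * z

# Hypothetical usage: encode two short example clips, then drive the
# speech-conditioned decoder with a blended, slightly exaggerated style.
# z_happy = style_encoder(example_clip_happy)   # -> (1, style_dim)
# z_tired = style_encoder(example_clip_tired)   # -> (1, style_dim)
# z_mix = scale_style(blend_styles(z_happy, z_tired, alpha=0.5), gain=1.2)
# motion = gesture_decoder(speech_features, z_mix)

Simple linear operations like these are plausible here because variational training encourages a smooth latent space, so interpolated embeddings tend to decode to coherent intermediate styles.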
Citation
@article{10.1111:cgf.14734,
  journal = {Computer Graphics Forum},
  title = {{ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech}},
  author = {Ghorbani, Saeed and Ferstl, Ylva and Holden, Daniel and Troje, Nikolaus F. and Carbonneau, Marc-André},
  year = {2023},
  publisher = {Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.14734}
}