Browsing by Author "Ghorbani, Saeed"
Now showing 1 - 2 of 2
Item
Probabilistic Character Motion Synthesis using a Hierarchical Deep Latent Variable Model (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Ghorbani, Saeed; Wloka, Calden; Etemad, Ali; Brubaker, Marcus A.; Troje, Nikolaus F.; Bender, Jan and Popa, Tiberiu
We present a probabilistic framework to generate character animations based on weak control signals, such that the synthesized motions are realistic while retaining the stochastic nature of human movement. The proposed architecture, designed as a hierarchical recurrent model, maps each sub-sequence of motions into a stochastic latent code using a variational autoencoder extended over the temporal domain. We also propose an objective function that respects the impact of each joint on the pose and compares joint angles based on angular distance. We use two novel quantitative protocols and human qualitative assessment to demonstrate the ability of our model to generate convincing and diverse periodic and non-periodic motion sequences without the need for strong control signals.

Item
ZeroEGGS: Zero‐shot Example‐based Gesture Generation from Speech (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Ghorbani, Saeed; Ferstl, Ylva; Holden, Daniel; Troje, Nikolaus F.; Carbonneau, Marc‐André; Hauser, Helwig and Alliez, Pierre
We present ZeroEGGS, a neural network framework for speech‐driven gesture generation with zero‐shot style control by example. This means style can be controlled via only a short example motion clip, even for motion styles unseen during training. Our model uses a Variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs given the input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state‐of‐the‐art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high‐quality dataset of full‐body gesture motion including fingers, with speech, spanning 19 different styles. Our code and data are publicly available at .
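The first item's objective function weights each joint by its impact on the pose and compares joint angles by angular distance. Below is a minimal sketch of such a per-joint weighted angular-distance loss, assuming quaternion joint rotations and illustrative hand-picked weights; the paper's exact weighting scheme and rotation parameterization are not given here, so treat those details as assumptions rather than the authors' implementation.

```python
import numpy as np

def quat_angular_distance(q_pred, q_true):
    """Geodesic angle (radians) between two unit quaternions, per joint."""
    # |dot| handles the quaternion double cover (q and -q encode the same rotation).
    dot = np.abs(np.sum(q_pred * q_true, axis=-1))
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

def weighted_pose_loss(q_pred, q_true, joint_weights):
    """Average per-joint angular error, weighted by each joint's influence on the pose.

    q_pred, q_true: (num_joints, 4) unit quaternions.
    joint_weights:  (num_joints,) non-negative weights (hypothetical values below).
    """
    angles = quat_angular_distance(q_pred, q_true)
    return float(np.sum(joint_weights * angles) / np.sum(joint_weights))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_joints = 5
    q_true = rng.normal(size=(num_joints, 4))
    q_true /= np.linalg.norm(q_true, axis=-1, keepdims=True)
    q_pred = q_true + 0.05 * rng.normal(size=(num_joints, 4))
    q_pred /= np.linalg.norm(q_pred, axis=-1, keepdims=True)
    # Hypothetical weights: e.g. hips/spine joints matter more than extremities.
    weights = np.array([2.0, 1.5, 1.0, 0.5, 0.5])
    print(weighted_pose_loss(q_pred, q_true, weights))
```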
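The ZeroEGGS abstract notes that style can be modified by blending and scaling style embeddings in the learned latent space. The sketch below illustrates only that latent-space manipulation; the embedding dimensionality (64 here), the placeholder vectors standing in for encoder outputs, and the function names are assumptions for illustration, not the released ZeroEGGS API.

```python
import numpy as np

def blend_styles(style_a, style_b, alpha):
    """Linearly interpolate between two style embedding vectors (alpha in [0, 1])."""
    return (1.0 - alpha) * style_a + alpha * style_b

def scale_style(style, gain):
    """Scale a style embedding to exaggerate (gain > 1) or attenuate (gain < 1) a style."""
    return gain * style

if __name__ == "__main__":
    # Placeholder embeddings standing in for a style encoder's outputs on two example clips.
    rng = np.random.default_rng(1)
    style_a = rng.normal(size=(64,))  # hypothetical 64-D style embedding
    style_b = rng.normal(size=(64,))
    half_and_half = blend_styles(style_a, style_b, alpha=0.5)
    exaggerated = scale_style(style_a, gain=1.5)
    # A trained speech-to-gesture decoder would then condition on these vectors.
    print(half_and_half.shape, exaggerated.shape)
```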