Robust Diffusion-based Motion In-betweening
dc.contributor.author | Qin, Jia | en_US |
dc.contributor.author | Yan, Peng | en_US |
dc.contributor.author | An, Bo | en_US |
dc.contributor.editor | Chen, Renjie | en_US |
dc.contributor.editor | Ritschel, Tobias | en_US |
dc.contributor.editor | Whiting, Emily | en_US |
dc.date.accessioned | 2024-10-13T18:10:00Z | |
dc.date.available | 2024-10-13T18:10:00Z | |
dc.date.issued | 2024 | |
dc.description.abstract | The emergence of learning-based motion in-betweening techniques offers animators a more efficient way to animate characters. However, existing non-generative methods either struggle to support long transition generation or produce results that lack diversity. Meanwhile, diffusion models have shown promising results in synthesizing diverse and high-quality motions driven by text and keyframes. However, in these methods, keyframes often serve as a guide rather than a strict constraint and can sometimes be ignored when keyframes are sparse. To address these issues, we propose a lightweight yet effective diffusion-based motion in-betweening framework that generates animations conforming to keyframe constraints. We incorporate keyframe constraints into the training phase to enhance robustness in handling various constraint densities. Moreover, we employ relative positional encoding to improve the model's generalization on long-range in-betweening tasks. This approach enables the model to learn from short animations while generating realistic in-betweening motions spanning thousands of frames. We conduct extensive experiments to validate our framework using the newly proposed metrics K-FID, K-Diversity, and K-Error, designed to evaluate generative in-betweening methods. Results demonstrate that our method outperforms existing diffusion-based methods across various lengths and keyframe densities. We also show that our method can be applied to text-driven motion synthesis, offering fine-grained control over the generated results. | en_US |
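A minimal sketch of the two mechanisms the abstract names: keeping keyframes clean during diffusion training so they act as hard constraints, and a clipped relative positional bias so attention depends only on frame offsets, letting a model trained on short clips scale to much longer sequences. This is not the authors' implementation; all names (MAX_REL_DIST, keyframe_mask, masked_noising) are illustrative assumptions.

```python
# Sketch only, assuming a PyTorch transformer-based diffusion model over
# per-frame pose features. Not the paper's actual code.
import torch

MAX_REL_DIST = 64  # assumed clipping range for relative frame offsets

def relative_position_bias(table: torch.Tensor, T: int) -> torch.Tensor:
    """Look up a learned bias for each pairwise frame offset, clipped to
    [-MAX_REL_DIST, MAX_REL_DIST] so sequences longer than those seen in
    training reuse the same bias entries (added to attention logits)."""
    offsets = torch.arange(T)[None, :] - torch.arange(T)[:, None]  # (T, T)
    offsets = offsets.clamp(-MAX_REL_DIST, MAX_REL_DIST) + MAX_REL_DIST
    return table[offsets]  # (T, T)

def masked_noising(x0: torch.Tensor, noise: torch.Tensor,
                   alpha_bar_t: float, keyframe_mask: torch.Tensor) -> torch.Tensor:
    """Forward-diffuse only the unconstrained frames; keyframes stay clean,
    so the model is trained to treat them as strict constraints.
    keyframe_mask: (T,) with 1 at keyframes, 0 elsewhere."""
    xt = (alpha_bar_t ** 0.5) * x0 + ((1 - alpha_bar_t) ** 0.5) * noise
    m = keyframe_mask[:, None]          # broadcast over the feature dim
    return m * x0 + (1 - m) * xt

# Toy usage: 32 frames, 8 features per frame, keyframes at the endpoints.
T, D = 32, 8
bias_table = torch.zeros(2 * MAX_REL_DIST + 1)   # learned in practice
bias = relative_position_bias(bias_table, T)     # (T, T) attention bias
x0 = torch.randn(T, D)
mask = torch.zeros(T); mask[0] = mask[-1] = 1.0
xt = masked_noising(x0, torch.randn_like(x0), alpha_bar_t=0.5, keyframe_mask=mask)
```

Because the bias depends only on (clipped) offsets rather than absolute frame indices, the same table applies unchanged at inference lengths far beyond the training clips, consistent with the abstract's claim of generalizing to thousands of frames.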
dc.description.number | 7 | |
dc.description.sectionheaders | Human II | |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.volume | 43 | |
dc.identifier.doi | 10.1111/cgf.15260 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.pages | 11 pages | |
dc.identifier.uri | https://doi.org/10.1111/cgf.15260 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf15260 | |
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.rights | Attribution 4.0 International License | |
dc.subject | Computing methodologies → Motion capture | |
dc.subject | Neural networks | |
dc.title | Robust Diffusion-based Motion In-betweening | en_US |