Volume 44 (2025)
Browsing Volume 44 (2025) by Subject "animation"
Now showing 1 - 3 of 3
Item
Automatic Inbetweening for Stroke‐Based Painterly Animation (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Barroso, Nicolas; Fondevilla, Amélie; Vanderhaeghe, David
Painterly 2D animation, like the paint‐on‐glass technique, is a tedious task performed by skilled artists, primarily using traditional manual methods. Although CG tools can simplify the creation process, previous work often focuses on temporal coherence, which typically comes at the cost of the handmade look and feel. In contrast to cartoon animation, where regions are typically filled with smooth gradients, stroke‐based stylized 2D animation requires careful consideration of how shapes are filled, as each stroke may be perceived individually. We propose a method to generate intermediate frames from example keyframes and a motion description. This method allows artists to create only one image for every five to ten output images in the animation, while the automatically generated intermediate frames provide plausible inbetweens.

Item
DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Huang, Yuhang; Kanai, Takashi
In the field of brittle fracture animation, generating realistic destruction animations using physics‐based simulation methods is computationally expensive. While techniques based on Voronoi diagrams or pre‐fractured patterns are effective for real‐time applications, they fail to incorporate collision conditions when determining fractured shapes at runtime. This paper introduces a novel learning‐based approach for predicting fractured shapes from collision dynamics at runtime. Our approach seamlessly integrates realistic brittle fracture animations with rigid body simulations, utilising boundary element method (BEM) brittle fracture simulations to generate training data. To integrate collision scenarios and fractured shapes into a deep learning framework, we introduce generative geometric segmentation, distinct from both instance and semantic segmentation, to represent 3D fragment shapes. We propose an eight‐dimensional latent code to address the challenge of optimising multiple discrete fracture‐pattern targets that share similar continuous collision latent codes. This code follows a discrete normal distribution corresponding to a specific fracture pattern within our latent impulse representation design, which enables the prediction of fractured shapes using neural discrete representation learning. Our experimental results show that our approach generates considerably more detailed brittle fractures than existing techniques, while typically requiring less computation than traditional simulation methods at comparable resolutions.
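To make the frame budget in the inbetweening entry above (Barroso et al.) concrete, here is a minimal sketch of generating intermediates between two stroke‐based keyframes. It is written under strong assumptions: the paper synthesizes painterly strokes from examples and a motion description, whereas this sketch uses plain linear blending of corresponding strokes; Stroke, lerp_stroke and inbetween are hypothetical names, not the authors' code.

```python
# Hypothetical sketch of keyframe inbetweening for stroke-based animation.
# Only the frame-budget idea (one drawn keyframe per five to ten output
# frames) is illustrated; stroke matching and painterly synthesis from the
# motion description are replaced by naive linear interpolation.

from dataclasses import dataclass

@dataclass
class Stroke:
    points: list[tuple[float, float]]  # polyline in image space

def lerp_stroke(a: Stroke, b: Stroke, t: float) -> Stroke:
    """Linearly blend two corresponding strokes (assumes equal point counts)."""
    pts = [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
           for (ax, ay), (bx, by) in zip(a.points, b.points)]
    return Stroke(pts)

def inbetween(key_a: list[Stroke], key_b: list[Stroke], n_frames: int):
    """Yield n_frames intermediate stroke sets between two keyframes."""
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)
        yield [lerp_stroke(a, b, t) for a, b in zip(key_a, key_b)]

# With one keyframe per six output frames, five inbetweens fill the gap:
# frames = list(inbetween(key_a, key_b, n_frames=5))
```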
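The eight‐dimensional discrete latent code in the DeepFracture entry above suggests a vector‐quantisation‐style lookup at inference time: encode the collision, snap the code to a learned codebook entry, and decode that entry into fragment shapes. The sketch below shows only that lookup, assuming hypothetical encode_collision and decode_fragments networks and a codebook with one entry per fracture pattern; it is not the authors' implementation.

```python
# Hypothetical sketch of the discrete-latent lookup in a DeepFracture-style
# pipeline: a collision impulse is encoded into an 8-D continuous code,
# quantized to the nearest learned codebook entry (one per fracture
# pattern), and the discrete entry drives a fragment-shape decoder.

import numpy as np

LATENT_DIM = 8  # the paper's eight-dimensional latent code

def quantize(z: np.ndarray, codebook: np.ndarray) -> tuple[int, np.ndarray]:
    """Snap a continuous latent code to its nearest discrete codebook entry."""
    dists = np.linalg.norm(codebook - z, axis=1)  # distance to each pattern
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

def predict_fracture(impulse, codebook, encode_collision, decode_fragments):
    """Run the assumed inference path: encode, quantize, decode."""
    z = encode_collision(impulse)           # continuous 8-D collision code
    pattern_id, z_q = quantize(z, codebook) # discrete fracture pattern
    return decode_fragments(z_q)            # predicted fragment shapes

# Example setup with 64 learned fracture patterns (placeholder values):
# codebook = np.random.randn(64, LATENT_DIM)
```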
Item
Deep‐Learning‐Based Facial Retargeting Using Local Patches (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Choi, Yeonsoo; Lee, Inyup; Cha, Sihun; Kim, Seonghyeon; Jung, Sunjin; Noh, Junyong
In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when it is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion so that the semantics of the original facial motions are preserved after retargeting. To achieve this, we propose a local patch‐based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re‐enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame, yielding a complete facial animation sequence. Extensive experiments demonstrate that our method successfully transfers the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportions.
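The three‐module pipeline in the retargeting entry above (Choi et al.) maps naturally to a per‐frame loop. The sketch below shows that flow under the assumption that the modules are available as trained callables; every function name here is a placeholder, not the authors' API.

```python
# Hypothetical sketch of the three-module retargeting flow described in the
# abstract: per source frame, extract local facial patches, re-enact them on
# the target character, then estimate the character's animation parameters.

def retarget_sequence(source_frames, target_character,
                      extract_patches, reenact, estimate_weights):
    """Map a source performance video to per-frame animation parameters."""
    animation = []
    for frame in source_frames:
        patches = extract_patches(frame)                 # Automatic Patch Extraction Module
        target_patches = [reenact(p, target_character)   # Reenactment Module
                          for p in patches]
        weights = estimate_weights(target_patches)       # Weight Estimation Module
        animation.append(weights)
    return animation
```

Collecting parameters per frame, rather than retargeting whole meshes, matches the abstract's emphasis on producing a complete animation sequence from estimated weights.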