Browsing by Author "Taketomi, Takafumi"
Now showing 1 - 5 of 5

Item: BareSkinNet: De-makeup and De-lighting via 3D Face Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Yang, Xingchao; Taketomi, Takafumi; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from a face image. Our method leverages a 3D morphable model and requires neither a reference clean-face image nor a specified lighting condition. By incorporating 3D face reconstruction, we can easily obtain the 3D geometry and coarse 3D textures. From this information, we infer normalized 3D face texture maps (diffuse, normal, roughness, and specular) with an image-translation network. The reconstructed 3D face textures, freed of undesirable information, significantly benefit subsequent processes such as re-lighting or re-makeup. In experiments, we show that BareSkinNet outperforms state-of-the-art makeup-removal methods. Our method is also remarkably effective at removing makeup to generate consistent, high-fidelity texture maps, which makes it extendable to many realistic face-generation applications, and it can automatically build graphics assets of before-and-after face-makeup images with corresponding 3D data. This will help artists accelerate work such as 3D makeup avatar creation.
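The pipeline above hinges on an image-translation network that maps the coarse reconstructed texture to normalized material maps. The following is a minimal PyTorch sketch of that step; the architecture, channel counts, and resolution are illustrative assumptions, not the authors' actual BareSkinNet design.

```python
# Minimal sketch of the texture-normalization step described above: a coarse
# UV texture goes in, normalized material maps come out. Shapes and layer
# choices are hypothetical, not the paper's actual network.
import torch
import torch.nn as nn

class TextureTranslator(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        # Shallow encoder-decoder; the real model is presumably much deeper.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            # 3 (diffuse) + 3 (normal) + 1 (roughness) + 1 (specular) = 8 channels
            nn.ConvTranspose2d(width, 8, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, coarse_uv_texture):
        out = self.decoder(self.encoder(coarse_uv_texture))
        diffuse, normal, roughness, specular = torch.split(out, [3, 3, 1, 1], dim=1)
        return diffuse, normal, roughness, specular

# One 256x256 coarse UV texture in, four normalized maps out.
maps = TextureTranslator()(torch.rand(1, 3, 256, 256))
```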
Item: Context-based Style Transfer of Tokenized Gestures (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Kuriyama, Shigeru; Mukai, Tomohiko; Taketomi, Takafumi; Mukasa, Tomoyuki; Dominik L. Michels; Soeren Pirk

Gestural animations in the amusement and entertainment fields often require rich expression; however, automatically synthesizing characteristic gestures remains challenging. Style transfer based on a neural network model is a potential solution, but existing methods mainly focus on cyclic motions such as gaits and require re-training when new motion styles are added. Moreover, their per-pose transformation cannot account for time-dependent features, so motion styles of different periods and timings are difficult to transfer. This limitation is fatal for gestural motions, which require complicated time alignment owing to the variety of exaggerated or intentionally performed behaviors. This study introduces a context-based style transfer of gestural motions with neural networks that ensures stable conversion even for exaggerated, dynamically complicated gestures. We present a model based on a vision transformer that transfers the content and style features of gestures by time-segmenting them to compose tokens in a latent space. We extend this model to yield the probability of swapping gesture tokens for style transfer. A transformer model is well suited to semantically consistent matching among gesture tokens, owing to their correlation with spoken words. The compact architecture of our network requires only a small number of parameters and little computation, which is suitable for real-time applications on an ordinary device. We introduce loss functions based on the restoration error of identically and cyclically transferred gesture tokens, together with content and style similarity losses evaluated by splicing features inside the transformer. This design of losses enables unsupervised and zero-shot learning, which gives the method scalability with respect to motion data. We comparatively evaluated our style-transfer method, focusing mainly on expressive gestures, using a dataset we captured for various scenarios and styles, and we introduce new error metrics tailored to gestures. Our experiments show that our method surpasses existing methods in the numerical accuracy and stability of style transfer.
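The central idea above is time-segmenting a gesture into tokens that a transformer can match between content and style sequences. Below is a minimal PyTorch sketch of that tokenization; the segment length, dimensions, and joint encoding are hypothetical, and the actual model's token-swapping mechanism and losses are more involved.

```python
# Minimal sketch of gesture tokenization: a clip is split into fixed-length
# time segments, each segment becomes one latent token, and a transformer
# attends over content and style tokens jointly. All sizes are assumptions.
import torch
import torch.nn as nn

POSE_DIM, SEG_LEN, D_MODEL = 69, 8, 128  # hypothetical sizes

tokenize = nn.Linear(POSE_DIM * SEG_LEN, D_MODEL)  # one token per time segment
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)

def gesture_tokens(motion):  # motion: (frames, POSE_DIM)
    frames = motion.shape[0] // SEG_LEN * SEG_LEN  # drop the ragged tail
    segments = motion[:frames].reshape(-1, SEG_LEN * POSE_DIM)
    return tokenize(segments).unsqueeze(0)  # (1, n_tokens, D_MODEL)

content = gesture_tokens(torch.randn(120, POSE_DIM))
style = gesture_tokens(torch.randn(96, POSE_DIM))
# A style-transfer head would then decide which style tokens to swap
# into the content sequence based on these fused features.
fused = encoder(torch.cat([content, style], dim=1))
```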
Item: Garment Model Extraction from Clothed Mannequin Scan (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Gao, Qiqi; Taketomi, Takafumi; Hauser, Helwig and Alliez, Pierre

Modelling garments with rich details requires enormous time and expertise from artists. Recent works reconstruct garments through segmentation of clothed human scans. However, existing methods rely on particular human body templates and do not perform well on loose garments such as skirts. This paper presents a two-stage pipeline for extracting high-fidelity garments from static scan data of clothed mannequins. Our key contribution is a novel method for tracking both tight and loose boundaries between garments and mannequin skin. Our algorithm enables the modelling of off-the-shelf clothing with fine details. It is independent of human template models and requires only minimal mannequin priors. The effectiveness of our method is validated through quantitative and qualitative comparisons with the baseline method. The results demonstrate that our method can accurately extract both tight and loose garments within a reasonable time.

Item: Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Yang, Xingchao; Taketomi, Takafumi; Kanamori, Yoshihiro; Myszkowski, Karol; Niessner, Matthias

Facial makeup enriches the beauty of not only real humans but also virtual characters, so makeup for 3D facial models is in high demand in production. However, painting directly on 3D faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles with shading effects and occlusions. This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait. Our method consists of three steps. First, we exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials, such as the geometry and diffuse/specular albedos, represented in UV space. Second, we refine the coarse materials, which may have missing pixels due to occlusions, through inpainting and optimization. Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse albedo. Our method offers various applications for not only 3D facial models but also 2D portrait images. The extracted makeup is well aligned in UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. Our disentangled materials also enable robust makeup transfer and illumination-aware makeup interpolation/removal without a reference image.
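The final decomposition step suggests a layered model in which the diffuse albedo is the bare skin composited with the makeup layer through the alpha matte. The NumPy sketch below illustrates that reading with standard alpha compositing; the compositing equation and the strength parameter are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Minimal sketch of how a bare-skin layer, a makeup layer, and an alpha
# matte could recombine in UV space, assuming standard alpha compositing.
import numpy as np

def composite(bare_skin, makeup, alpha, strength=1.0):
    """Blend makeup over bare skin; strength in [0, 1] interpolates the
    makeup intensity, and strength=0.0 amounts to makeup removal."""
    a = np.clip(alpha * strength, 0.0, 1.0)
    return bare_skin * (1.0 - a) + makeup * a

H = W = 256  # hypothetical UV-texture resolution
bare = np.random.rand(H, W, 3)   # stands in for the extracted bare-skin albedo
paint = np.random.rand(H, W, 3)  # stands in for the extracted makeup layer
matte = np.random.rand(H, W, 1)  # stands in for the extracted alpha matte

full = composite(bare, paint, matte)       # reconstructed diffuse albedo
half = composite(bare, paint, matte, 0.5)  # illumination-free interpolation
```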
Item: Refinement of Hair Geometry by Strand Integration (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Maeda, Ryota; Takayama, Kenshi; Taketomi, Takafumi; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.

Reconstructing 3D hair is challenging because of its complex micro-scale geometry, yet it is essential for the efficient creation of high-fidelity virtual humans. Existing hair-capture methods based on multi-view stereo tend to generate noisy, inaccurate results. In this study, we propose a refinement method for hair geometry that incorporates the gradients of strands into the computation of their positions. We formulate a gradient-integration strategy for hair strands. We evaluate the performance of our method on a synthetic multi-view dataset containing four hairstyles and show that our refinement produces more accurate hair geometry. Furthermore, we tested our method on real image input, where it produces a plausible result. Our source code is publicly available at https://github.com/elerac/strand_integration.
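Gradient integration along a strand can be read as a one-dimensional Poisson-style problem: solve for point positions whose successive differences agree with estimated gradients while staying near the initial multi-view-stereo positions. The NumPy sketch below illustrates this generic least-squares formulation; the data weight and the gradient estimates are placeholders, and the authors' released code at the link above is the authoritative implementation.

```python
# Minimal sketch of gradient integration for one strand: refine point
# positions so that finite differences match per-segment gradient targets,
# with a weak data term anchoring the noisy input. Generic illustration,
# not the paper's exact formulation.
import numpy as np

def integrate_strand(points, gradients, data_weight=0.1):
    """points: (n, 3) noisy positions; gradients: (n-1, 3) target
    differences between consecutive points."""
    n = points.shape[0]
    # Finite-difference operator D: (D x)[i] = x[i+1] - x[i].
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0
    # Stack difference equations and the data term into one system A x = b.
    A = np.vstack([D, data_weight * np.eye(n)])
    b = np.vstack([gradients, data_weight * points])
    refined, *_ = np.linalg.lstsq(A, b, rcond=None)
    return refined

# Synthetic noisy strand; in practice the gradients would come from images.
noisy = np.cumsum(np.random.randn(50, 3) * 0.01, axis=0)
grads = np.diff(noisy, axis=0)  # stands in for image-based gradient estimates
smooth = integrate_strand(noisy, grads)
```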