Browsing by Author "Kanamori, Yoshihiro"
Now showing 1 - 4 of 4
Item: Diversifying Semantic Image Synthesis and Editing via Class- and Layer-wise VAEs (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Endo, Yuki; Kanamori, Yoshihiro
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: Semantic image synthesis is the process of generating photorealistic images from a single semantic mask. To enrich the diversity of multimodal image synthesis, previous methods have controlled the global appearance of an output image by learning a single latent space. However, a single latent code is often insufficient for capturing various object styles because object appearance depends on multiple factors. To handle the individual factors that determine object styles, we propose a class- and layer-wise extension to the variational autoencoder (VAE) framework that learns multiple latent spaces and thereby allows flexible control over each object class from the local to the global level. Through extensive experiments with real and synthetic datasets in three different domains, we demonstrate that our method generates images that are both plausible and more diverse than those of state-of-the-art methods. We also show that our method enables a wide range of applications in image synthesis and editing tasks.

Item: Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Yang, Xingchao; Taketomi, Takafumi; Kanamori, Yoshihiro
Editors: Myszkowski, Karol; Niessner, Matthias
Abstract: Facial makeup enriches the beauty of not only real humans but also virtual characters; therefore, makeup for 3D facial models is in high demand in production. However, painting directly on 3D faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles with shading effects and occlusions. This paper presents the first method for extracting makeup for 3D facial models from a single makeup portrait. Our method consists of three steps. First, we exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse materials, such as geometry and diffuse/specular albedos, represented in UV space. Second, we refine the coarse materials, which may have missing pixels due to occlusions, via inpainting and optimization. Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse albedo. Our method offers various applications for not only 3D facial models but also 2D portrait images. The extracted makeup is well aligned in UV space, from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. Our disentangled materials also enable robust makeup transfer and illumination-aware makeup interpolation/removal without a reference image.

Item: Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Tajima, Daichi; Kanamori, Yoshihiro; Endo, Yuki
Editors: Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan
Abstract: Modern supervised approaches for human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data with a small number of individuals) or limited to diffuse materials (e.g., commercial 3D-scanned human models). As a result, existing human relighting techniques suffer from poor generalization and a synthetic-to-real domain gap.
In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second stage, we train another network that enhances non-diffuse reflection by learning residuals between real photos and images reconstructed by the diffuse-only network. Thanks to the second stage, we achieve better generalization across various cloth textures while reducing the domain gap. Furthermore, to handle input videos, we integrate an illumination-aware deep video prior that greatly reduces flickering artifacts even in challenging settings under dynamic illumination.

Item: Single-View Modeling of Layered Origami with Plausible Outer Shape (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Kato, Yuya; Tanaka, Shinichi; Kanamori, Yoshihiro; Mitani, Jun
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Abstract: Modeling 3D origami pieces with conventional software is laborious due to the geometric constraints imposed by the complicated layered structure. Targeting origami models used in visual content such as CG illustrations and movies, we propose an interactive system that dramatically simplifies the modeling of 3D origami pieces with plausible outer shapes while omitting accurate inner structures. By focusing on flat origami models with the front-and-back symmetry commonly found in traditional artworks, our system realizes easy and quick modeling via a single-view interface: given a reference image of the target origami piece, the user draws polygons of planar faces onto the image and assigns annotations indicating the types of folding operations. Our system automatically rectifies the manually specified polygons, infers the folded structure that should yield the user-specified polygons with reference to the depth order of the layered polygons, and generates a plausible 3D model while accounting for gaps between layers. Our system is versatile enough to model pseudo-origami models that are not realizable by folding a single sheet of paper. Our user study demonstrates that even novice users without specialized knowledge of or experience with origami and 3D modeling can quickly create plausible origami models.
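The class- and layer-wise latent design described in the first item above lends itself to a short illustration. The following is a minimal PyTorch-style sketch of the general idea, not the authors' implementation: one latent code is kept per semantic class at each feature level and broadcast onto the pixels of that class, so each object can be restyled independently. All names and sizes here (ClassLayerwiseLatents, broadcast, the toy mask dimensions) are hypothetical.

```python
import torch
import torch.nn as nn


class ClassLayerwiseLatents(nn.Module):
    """Keeps one latent code per semantic class at each feature level (assumed layout)."""

    def __init__(self, num_classes: int, num_levels: int, latent_dim: int):
        super().__init__()
        self.num_classes = num_classes
        self.num_levels = num_levels
        self.latent_dim = latent_dim

    def sample(self, batch: int):
        # One Gaussian code per (level, class); a trained encoder would predict
        # the mean and variance instead of sampling from a standard normal.
        return [torch.randn(batch, self.num_classes, self.latent_dim)
                for _ in range(self.num_levels)]

    def broadcast(self, codes, mask):
        # mask: (B, num_classes, H, W) one-hot semantic mask.
        # Scatter each class code onto the pixels of that class, per level,
        # giving per-level style maps that a decoder could consume.
        return [torch.einsum("bcd,bchw->bdhw", z, mask.float()) for z in codes]


# Toy usage: 35 classes, 3 feature levels, 64-dimensional codes.
lat = ClassLayerwiseLatents(num_classes=35, num_levels=3, latent_dim=64)
mask = torch.zeros(2, 35, 128, 128)
mask[:, 0] = 1.0                              # toy mask: every pixel is class 0
style_maps = lat.broadcast(lat.sample(batch=2), mask)
print([m.shape for m in style_maps])          # 3 maps of shape (2, 64, 128, 128)
```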
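The alpha-matte decomposition in the makeup-extraction item suggests a simple compositing model. The sketch below assumes the diffuse albedo is an alpha blend of a makeup layer over bare skin; this is one plausible reading of the abstract rather than the paper's exact formulation, and the function names are hypothetical.

```python
import torch


def composite(skin, makeup, alpha):
    """Diffuse albedo as an alpha blend of makeup over bare skin (UV-space maps)."""
    return alpha * makeup + (1.0 - alpha) * skin


def interpolate_makeup(skin, makeup, alpha, t):
    """t in [0, 1]: 0 removes the makeup entirely, 1 restores full strength."""
    return composite(skin, makeup, t * alpha)


def transfer_makeup(target_skin, source_makeup, source_alpha):
    """Apply a makeup layer extracted from one face onto another bare-skin albedo,
    assuming both are aligned in the same UV parameterization."""
    return composite(target_skin, source_makeup, source_alpha)


# Toy usage with random UV-space maps.
skin, makeup = torch.rand(3, 256, 256), torch.rand(3, 256, 256)
alpha = torch.rand(1, 256, 256)
half_strength = interpolate_makeup(skin, makeup, alpha, t=0.5)
```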
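The two-stage relighting pipeline in the third item can likewise be summarized in code. The sketch below is a toy stand-in, not the paper's networks: a diffuse-only stage relights the input under a target illumination, and a residual stage adds back non-diffuse effects learned from real photos. The architecture, the spherical-harmonics lighting encoding, and all names are assumptions.

```python
import torch
import torch.nn as nn


class DiffuseRelightNet(nn.Module):
    """Stage 1 stand-in: relights the input under a target illumination
    (a toy 9-coefficient spherical-harmonics vector), diffuse reflection only."""

    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 9, 3, kernel_size=3, padding=1)  # toy stand-in

    def forward(self, image, light_sh):
        b, _, h, w = image.shape
        light = light_sh.view(b, -1, 1, 1).expand(b, light_sh.shape[1], h, w)
        return self.net(torch.cat([image, light], dim=1))


class ResidualNet(nn.Module):
    """Stage 2 stand-in: predicts non-diffuse residuals (specularities, shadows)
    learned from the gap between real photos and diffuse-only reconstructions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # toy stand-in

    def forward(self, diffuse_relit):
        return self.net(diffuse_relit)


def relight(image, light_sh, stage1, stage2):
    diffuse = stage1(image, light_sh)   # diffuse-only relighting
    residual = stage2(diffuse)          # non-diffuse correction
    return diffuse + residual           # final relit image


# Toy usage: relight a random photo under a random target illumination.
out = relight(torch.rand(1, 3, 256, 256), torch.rand(1, 9),
              DiffuseRelightNet(), ResidualNet())
print(out.shape)                        # torch.Size([1, 3, 256, 256])
```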