Browsing by Author "Seo, Kwanggyoon"
Item
Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Cha, Sihun; Seo, Kwanggyoon; Ashtari, Amirsaman; Noh, Junyong; Myszkowski, Karol; Niessner, Matthias

There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering the texture for the 3D human avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given image as it moves, it is critical to synthesize the occluded texture patterns that are unseen in the source image. To generate a plausible texture map for 3D human avatars, the occluded texture patterns need to be synthesized with respect to the texture visible in the given image. Moreover, the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refining processes. The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the geometry information. The sampled texture is further refined and adjusted by the refiner network. To maintain the clear details present in the given image, the sampled and refined textures are blended to produce the final texture map (a code sketch of this two-stage pipeline follows the listing). To effectively guide the sampler network, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to a task where the alignment must also be considered. We conducted experiments to show that our method outperforms previous methods both qualitatively and quantitatively.

Item
StylePortraitVideo: Editing Portrait Videos with Expression Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Seo, Kwanggyoon; Oh, Seoung Wug; Lu, Jingwan; Lee, Joon-Young; Kim, Seonghyeon; Noh, Junyong; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

High-quality portrait image editing has been made easier by recent advances in GANs (e.g., StyleGAN) and GAN inversion methods that project images onto a pre-trained GAN's latent space. However, it is hard to extend these image editing methods to videos while producing temporally coherent and natural-looking results. We find two main challenges: reproducing diverse video frames and preserving the natural motion after editing. In this work, we propose solutions to both. First, we propose a video adaptation method that enables the generator to reconstruct the original input identity, unusual poses, and expressions in the video. Second, we propose an expression dynamics optimization that tweaks the latent codes to maintain the meaningful motion of the original video (a sketch of this optimization also follows the listing). Based on these methods, we build a StyleGAN-based high-quality portrait video editing system that can edit in-the-wild videos in a temporally coherent way at up to 4K resolution.
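For a concrete picture of the sampler/refiner pipeline described in the first item, here is a minimal PyTorch sketch. Everything in it is an illustrative assumption rather than the authors' code: the module names (SamplerNet, RefinerNet), the layer counts, the 3-channel UV-space geometry map used for conditioning, and the visibility-mask blending rule are all stand-ins for the architecture the abstract describes.

    import torch
    import torch.nn as nn

    class ConvBlock(nn.Sequential):
        def __init__(self, c_in, c_out):
            super().__init__(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.ReLU(inplace=True),
            )

    class SamplerNet(nn.Module):
        """Fills occluded texels and aligns the partial texture to the
        target mesh surface, conditioned on geometry information."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                ConvBlock(6, 64),   # partial texture (3ch) + geometry map (3ch)
                ConvBlock(64, 64),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, partial_tex, geometry):
            return self.net(torch.cat([partial_tex, geometry], dim=1))

    class RefinerNet(nn.Module):
        """Refines and adjusts the sampled texture."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                ConvBlock(3, 64),
                ConvBlock(64, 64),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, sampled_tex):
            return self.net(sampled_tex)

    def final_texture(sampler, refiner, partial_tex, geometry, visibility):
        # visibility: 1 where a texel is observed in the source image, 0 otherwise.
        sampled = sampler(partial_tex, geometry)
        refined = refiner(sampled)
        # Keep the sharp sampled details where the source is visible and fall
        # back to the refined result elsewhere (one plausible blending rule).
        return visibility * sampled + (1.0 - visibility) * refined

    sampler, refiner = SamplerNet(), RefinerNet()
    tex = final_texture(sampler, refiner,
                        torch.rand(1, 3, 256, 256),          # partial UV texture
                        torch.rand(1, 3, 256, 256),          # UV-space geometry map
                        torch.rand(1, 1, 256, 256).round())  # visibility mask

The curriculum learning scheme mentioned in the abstract would wrap the training loop around SamplerNet, starting with simple in-place sampling and gradually introducing alignment to the target mesh; that scheduling is omitted here.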
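Similarly, here is a minimal sketch of what the second item's expression dynamics optimization could look like, assuming per-frame latent codes obtained by GAN inversion. The loss below matches frame-to-frame latent deltas of the edited sequence to those of the original, with an anchor term keeping the result close to the edit; the paper's actual objective is not reproduced here, so treat every name, shape, and the loss formulation itself as an assumption.

    import torch

    def dynamics_loss(w_edit, w_orig):
        # Match frame-to-frame latent changes of the edited video to those
        # of the original video so the original motion is preserved.
        delta_edit = w_edit[1:] - w_edit[:-1]
        delta_orig = w_orig[1:] - w_orig[:-1]
        return torch.mean((delta_edit - delta_orig) ** 2)

    def optimize_expression_dynamics(w_edit, w_orig, steps=200, lr=0.01, anchor=0.1):
        # w_edit, w_orig: (num_frames, latent_dim) latents from GAN inversion.
        w = w_edit.clone().requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Anchor term (an assumption) keeps the tweaked codes near the edit.
            loss = dynamics_loss(w, w_orig) + anchor * torch.mean((w - w_edit) ** 2)
            loss.backward()
            opt.step()
        return w.detach()

    # Example with random stand-in latents (a real StyleGAN2 W+ code would be
    # 18 x 512 per frame; flattened here for simplicity):
    w_orig = torch.randn(30, 512)                  # 30 frames of inverted latents
    w_edit = w_orig + 0.1 * torch.randn(30, 512)   # edited latents, per frame
    w_final = optimize_expression_dynamics(w_edit, w_orig)

The video adaptation step described in the abstract (fine-tuning the generator so it can reconstruct the input identity, unusual poses, and expressions) would precede this optimization and is not sketched here.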