Browsing by Author "Chang, Jian"
Now showing 1 - 3 of 3
Item
DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology (The Eurographics Association, 2022)
Jiang, Diqiong; You, Lihua; Chang, Jian; Tong, Ruofeng; Yang, Yin; Parakkat, Amal D.; Deng, Bailin; Noh, Seung-Tak
High-quality and personalized digital human faces have been widely used in media and entertainment, from film and game production to virtual reality. However, existing technology for generating digital faces requires extremely intensive labor, which prevents the large-scale adoption of digital face technology. To tackle this problem, the proposed research will investigate deep learning-based facial modeling and animation technologies to (1) create personalized face geometry from a single image, including a recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized production-level facial skin textures from a video or image sequence; and (3) automatically drive and animate a 3D target avatar from an actor's 2D facial video or audio. Our innovation is to accomplish these tasks both efficiently and precisely by using an end-to-end framework built on modern deep learning technology (StyleGAN, Transformer, NeRF).

Item
DiffusionPointLabel: Annotated Point Cloud Generation with Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Li, Tingting; Fu, Yunfei; Han, Xiaoguang; Liang, Hui; Zhang, Jian Jun; Chang, Jian; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
Point cloud generation aims to synthesize point clouds that do not exist in the supervised dataset. Generating a point cloud with specific semantic labels remains an under-explored problem. This paper proposes a formulation called DiffusionPointLabel, which performs point-label pair generation based on a Denoising Diffusion Probabilistic Model (DDPM). Specifically, we use a point cloud diffusion generative model and aggregate the intermediate features of the generator. On top of this, we propose a Feature Interpreter that transforms intermediate features into semantic labels. Furthermore, we employ an uncertainty measure to filter out unqualified point-label pairs, improving the quality of the generated point cloud dataset. Coupling these two designs enables us to automatically generate annotated point clouds, especially when supervised point-label pairs are scarce. Our method extends the application of point cloud generation models and surpasses state-of-the-art models.

Item
Generating High-quality Superpixels in Textured Images (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Zhang, Zhe; Xu, Panpan; Chang, Jian; Wang, Wencheng; Zhao, Chong; Zhang, Jian Jun; Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Superpixel segmentation is important for many image processing tasks. However, existing methods still have difficulty generating high-quality superpixels in textured images because they cannot separate textures from structures well. Although texture filtering can be applied to smooth textures before superpixel segmentation, the filtering also smooths object boundaries and thus weakens the quality of the generated superpixels. In this paper, we propose adaptive-scale box smoothing instead of texture filtering to obtain higher-quality texture and boundary information. Based on this, we design a novel distance metric that measures the distance between pixels by considering boundary, color, and Euclidean distance simultaneously. As a result, our method can achieve high-quality superpixel segmentation in textured images without texture filtering. The experimental results demonstrate the superiority of our method over existing methods, including learning-based ones. Benefiting from using boundaries to guide superpixel segmentation, our method can also suppress noise to generate high-quality superpixels in non-textured images.
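The uncertainty-based filtering step described in the DiffusionPointLabel abstract could be sketched as follows. This is a minimal illustration, assuming per-point class probabilities are available from some label predictor standing in for the paper's Feature Interpreter; the function name and the use of normalized predictive entropy as the uncertainty measure are assumptions, not the paper's exact definition.

```python
import numpy as np

def filter_point_label_pairs(probs, threshold=0.5):
    """Filter generated point-label pairs by predictive uncertainty.

    probs: (N, C) array of per-point class probabilities (hypothetical
    output of a label interpreter). Points whose normalized entropy
    exceeds `threshold` are treated as unqualified and marked for
    removal; the rest keep their argmax label.
    """
    eps = 1e-12
    # Shannon entropy per point, normalized by log(C) so it lies in [0, 1].
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    entropy /= np.log(probs.shape[1])
    keep = entropy <= threshold          # confident points survive
    labels = probs.argmax(axis=1)        # hard label for each point
    return keep, labels
```

A confident prediction such as (0.99, 0.01) passes the filter, while a maximally uncertain (0.5, 0.5) prediction is dropped, which is the intended behavior when building a cleaner annotated dataset.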
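The combined distance metric described in the superpixel abstract (boundary, color, and Euclidean distance considered simultaneously) could be sketched as a weighted sum. The weights, the linear combination, and the `boundary_cost` input are illustrative assumptions; the paper's actual metric and its boundary term are not reproduced here.

```python
import numpy as np

def pixel_distance(color_a, color_b, pos_a, pos_b, boundary_cost,
                   w_color=1.0, w_space=0.5, w_boundary=2.0):
    """Illustrative pixel-to-seed distance for superpixel assignment.

    Combines a color difference, a spatial (Euclidean) difference, and a
    precomputed boundary-crossing cost. All weights are hypothetical
    knobs, not values from the paper.
    """
    d_color = np.linalg.norm(np.asarray(color_a, float) - np.asarray(color_b, float))
    d_space = np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float))
    # Larger boundary_cost discourages a superpixel from growing across
    # a detected object boundary.
    return w_color * d_color + w_space * d_space + w_boundary * boundary_cost
```

In a SLIC-style assignment loop, each pixel would be attached to the seed minimizing this distance, so raising `w_boundary` makes superpixels adhere more tightly to object boundaries.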