Browsing by Author "Xiao, Chunxia"
Now showing 1 - 8 of 8
Item CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Zhang, Ling; Long, Chengjiang; Yan, Qingan; Zhang, Xiaolong; Xiao, Chunxia; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue

In this paper, we propose a novel context and lightness aware Generative Adversarial Network (CLA-GAN) framework for shadow removal, which refines a coarse result into a final shadow removal result in a coarse-to-fine fashion. At the refinement stage, we first obtain a lightness map using an encoder-decoder structure. With the lightness map and the coarse result as inputs, the following encoder-decoder refines the final result. Specifically, unlike current methods that are restricted to pixel-based features from shadow images, we embed a context-aware module into the refinement stage, which exploits patch-based features. The embedded module transfers features from non-shadow regions to shadow regions to ensure appearance consistency in the recovered shadow-free images. Since we consider patches, the module can additionally enhance the spatial association and continuity around neighboring pixels. To make the model pay more attention to shadow regions during training, we use dynamic weights in the loss function. Moreover, we augment the inputs of the discriminator by rotating images by different degrees and use a rotation adversarial loss during training, which makes the discriminator more stable and robust. Extensive experiments demonstrate the validity of the components in our CLA-GAN framework. Quantitative evaluation on different shadow datasets clearly shows the advantages of our CLA-GAN over state-of-the-art methods.
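As a concrete illustration of the rotation-based discriminator augmentation described in the abstract above, a minimal PyTorch sketch might look as follows. The abstract does not spell out the exact form of the rotation adversarial loss, so the BCE-based formulation and the `discriminator` callable below are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor, k: int) -> torch.Tensor:
    """Rotate a batch of images (N, C, H, W) by k * 90 degrees."""
    return torch.rot90(images, k, dims=(2, 3))

def rotation_augmented_d_loss(discriminator, real: torch.Tensor,
                              fake: torch.Tensor) -> torch.Tensor:
    """Average the adversarial loss over four rotated copies of each batch,
    so the discriminator must stay consistent under rotation."""
    loss = 0.0
    for k in range(4):  # 0, 90, 180, 270 degrees
        d_real = discriminator(rotate_batch(real, k))
        d_fake = discriminator(rotate_batch(fake.detach(), k))
        loss = loss + F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
                    + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return loss / 4
```

Averaging over the four rotations forces the discriminator's decision to be consistent under rotation, which is one way to read the stability and robustness benefit the abstract claims.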
Item Density-Aware Diffusion Model for Efficient Image Dehazing (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Ling; Bai, Wenxu; Xiao, Chunxia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily

Existing image dehazing methods have made remarkable progress. However, they generally perform poorly on images with dense haze, often producing unsatisfactory results with detail degradation or color distortion. In this paper, we propose a density-aware diffusion model (DADM) for image dehazing. Guided by the haze density, our DADM can handle images with dense haze and complex environments. Specifically, we introduce a density-aware dehazing network (DADNet) in the reverse diffusion process, which helps DADM gradually recover a clear haze-free image from a hazy image. To improve the performance of the network, we design a cross-feature density extraction module (CDEModule) to extract the haze density of the image and a density-guided feature fusion block (DFFBlock) to learn effective contextual features. Furthermore, we introduce an indirect sampling strategy in the test sampling process, which not only suppresses the accumulation of errors but also ensures the stability of the results. Extensive experiments on popular benchmarks validate the superior performance of the proposed method. The code is released at https://github.com/benchacha/DADM.

Item Facial Image Shadow Removal via Graph-based Feature Fusion (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhang, Ling; Chen, Ben; Liu, Zheng; Xiao, Chunxia; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.

Although natural image shadow removal methods have made significant progress, they often perform poorly on facial images due to the unique features of the face. Moreover, most learning-based methods are designed around pixel-level strategies, ignoring the global contextual relationships in the image. In this paper, we propose a graph-based feature fusion network (GraphFFNet) for facial image shadow removal. We apply a graph-based convolution encoder (GCEncoder) to extract global contextual relationships between regions in the coarse shadow-less image produced by an image flipper. Then, we introduce a feature modulation module to fuse the global topological relations onto the image features, enhancing the feature representation of the network. Finally, the fusion decoder integrates all the effective features to reconstruct the image features, producing a satisfactory shadow-removal result. Experimental results demonstrate the superiority of the proposed GraphFFNet over state-of-the-art methods and validate its effectiveness for facial image shadow removal.

Item Frequency-Aware Facial Image Shadow Removal through Skin Color and Texture Learning (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Ling; Xie, Wenyang; Xiao, Chunxia; Chen, Renjie; Ritschel, Tobias; Whiting, Emily

Existing facial image shadow removal methods predominantly rely on pre-extracted facial features. However, these methods often fail to capitalize on the full potential of these features, resorting to simplified utilization. Furthermore, they tend to overlook the importance of low-frequency information during the extraction of prior features, which can be easily compromised by noise. In our work, we propose a frequency-aware shadow removal network (FSRNet) for facial image shadow removal, which utilizes the skin color and texture information in the face to help recover illumination in shadow regions. Our FSRNet uses a frequency-domain image decomposition network to extract the low-frequency skin color map and the high-frequency texture map from face images, and applies a color-texture guided shadow removal network to produce the final shadow removal result. Concretely, the designed Fourier sparse attention block (FSABlock) can transform images from the spatial domain to the frequency domain and help the network focus on key information. We also introduce a skin color fusion module (CFModule) and a texture fusion module (TFModule) to enhance the understanding and utilization of color and texture features, promoting high-quality results without color distortion or detail blurring. Extensive experiments demonstrate the superiority of the proposed method. The code is available at https://github.com/laoxie521/FSRNet.
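The FSABlock described above moves features into the frequency domain before attending to key information. A heavily simplified, illustrative PyTorch sketch of such a frequency-domain attention step is shown below; the per-channel magnitude gate is an assumption for illustration, not the paper's exact block (see the released code for the real FSABlock):

```python
import torch
import torch.nn as nn

class FourierAttention(nn.Module):
    """Illustrative frequency-domain attention: modulate spectral
    magnitudes with a learned per-channel gate (a simplification,
    not the paper's exact FSABlock)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        freq = torch.fft.rfft2(x, norm="ortho")   # spatial -> frequency domain
        attn = self.gate(freq.abs())              # gate derived from spectral magnitude
        freq = freq * attn                        # emphasize informative frequencies
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
```

Because the gate is computed from spectral magnitudes, low-frequency content (which the abstract argues is easily compromised by noise) can be weighted explicitly rather than implicitly through spatial convolutions.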
Item Luminance Attentive Networks for HDR Image and Panorama Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Yu, Hanning; Liu, Wentao; Long, Chengjiang; Dong, Bo; Zou, Qin; Xiao, Chunxia; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan

Reconstructing a high dynamic range (HDR) image from a low dynamic range (LDR) image is a challenging, ill-posed problem. This paper proposes a luminance attentive network named LANet for HDR reconstruction from a single LDR image. Our method is based on two fundamental observations: (1) HDR images stored in relative luminance are scale-invariant, meaning an HDR image holds the same information when multiplied by any positive real number. Based on this observation, we propose a novel normalization method called "HDR calibration" for HDR images stored in relative luminance, calibrating HDR images into a similar luminance scale according to the LDR images. (2) The main difference between HDR images and LDR images lies in the under-/over-exposed areas, especially the highlighted ones. Following this observation, we propose a luminance attention module with a two-stream structure for LANet to pay more attention to the under-/over-exposed areas. In addition, we propose an extended network called panoLANet for HDR panorama reconstruction from an LDR panorama and build a dual-net structure for panoLANet to solve the distortion problem caused by the equirectangular panorama. Extensive experiments show that our proposed approach LANet can reconstruct visually convincing HDR images and demonstrate its superiority over state-of-the-art approaches in terms of all metrics in inverse tone mapping. The image-based lighting application with our proposed panoLANet also demonstrates that our method can simulate natural scene lighting using only an LDR panorama. Our source code is available at https://github.com/LWT3437/LANet.
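The "HDR calibration" above exploits the scale-invariance of relative luminance: multiplying the HDR image by any positive scalar loses no information, so it can be rescaled to sit on a luminance scale comparable to its LDR counterpart. One plausible reading is sketched below in NumPy; the median-matching rule and the exposure thresholds are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def hdr_calibration(hdr: np.ndarray, ldr: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Rescale a relative-luminance HDR image to roughly match the scale
    implied by its LDR counterpart (illustrative sketch only).

    hdr: float array, linear relative luminance, arbitrary positive scale.
    ldr: float array in [0, 1], gamma-encoded, same shape as hdr.
    """
    ldr_linear = np.clip(ldr, 0.0, 1.0) ** gamma   # undo display gamma
    mask = (ldr > 0.05) & (ldr < 0.95)             # well-exposed pixels only
    scale = np.median(ldr_linear[mask]) / (np.median(hdr[mask]) + 1e-8)
    return hdr * scale                             # lossless, by scale-invariance
```

Restricting the statistics to well-exposed pixels matters because, per observation (2), the under-/over-exposed areas are exactly where LDR and HDR disagree.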
Item Pyramid Multi-View Stereo with Local Consistency (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Liao, Jie; Fu, Yanping; Yan, Qingan; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon

In this paper, we propose a PatchMatch-based Multi-View Stereo (MVS) algorithm which can efficiently estimate geometry for textureless areas. Conventional PatchMatch-based MVS algorithms estimate depth and normal hypotheses mainly by optimizing photometric consistency metrics between a patch in the reference image and its projections in other images. Photometric consistency works well in textured regions but cannot discriminate textureless regions, which makes geometry estimation there difficult. To address this issue, we introduce local consistency. Based on the assumption that neighboring pixels with similar colors likely belong to the same surface and share similar depth-normal values, local consistency guides the depth and normal estimation with geometry from neighboring pixels with similar colors. To speed up the convergence of pixelwise local consistency across the image, we further introduce a pyramid architecture similar to previous work, which also provides coarse estimates at the upper levels. We validate the effectiveness of our method on the ETH3D benchmark and the Tanks and Temples benchmark. Results show that our method outperforms the state-of-the-art.

Item Scale-adaptive Structure-preserving Texture Filtering (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Song, Chengfang; Xiao, Chunxia; Lei, Ling; Sui, Haigang; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon

This paper proposes a scale-adaptive filtering method to improve the performance of structure-preserving texture filtering for image smoothing. With classical texture filters, it is usually challenging to smooth texture at multiple scales while preserving salient structures in an image. We address this issue within the framework of adaptive bilateral filtering, where the scales of the Gaussian range kernels are allowed to vary from pixel to pixel. Based on direction-wise statistics, our method distinguishes texture from structure effectively, identifies the appropriate scope around a pixel to be smoothed, and thus infers an optimal smoothing scale for it. By filtering the image with varying-scale kernels, our method smooths the image adaptively according to the distribution of texture. With commendable experimental results, we show that, while requiring fewer iterations, our proposed scheme boosts texture filtering performance in terms of preserving geometric structures at multiple scales, even after aggressive smoothing of the original image.

Item Wavelet Flow: Optical Flow Guided Wavelet Facial Image Fusion (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ding, Hong; Yan, Qingan; Fu, Gang; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon

Estimating correspondence between images using optical flow is a key component of image fusion. However, computing optical flow between a pair of facial images, including their backgrounds, is challenging due to large differences in illumination, texture, color, and background. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which can handle both the face and the background in the input images. The key idea is that, instead of computing flow directly between the input image pair, we estimate the image flow by incorporating multi-scale image transfer and optical flow guided wavelet fusion. Multi-scale image transfer helps preserve the background and lighting detail of the input, while optical flow guided wavelet fusion produces a series of intermediate images for further optimizing fusion quality. Our approach can significantly improve the performance of the optical flow algorithm and provide more natural fusion results for both faces and backgrounds in the images. We evaluate our method on a variety of datasets and show that it clearly outperforms existing approaches.
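To make the wavelet-fusion ingredient above concrete, the generic PyWavelets sketch below fuses two already-aligned grayscale images by averaging their approximation bands and keeping the stronger detail coefficients. It stands in for, but is not, the paper's optical-flow-guided fusion of intermediate images; the max-abs selection rule is a common textbook choice assumed here for illustration:

```python
import numpy as np
import pywt

def wavelet_fuse(a: np.ndarray, b: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two aligned grayscale float images in the wavelet domain."""
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]  # average the low-frequency approximation bands
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # keep the detail coefficient with the larger magnitude at each position
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)
```

In the paper's pipeline the inputs to such a fusion step would be the optical-flow-warped intermediate images, so that high-frequency detail is selected only where the two images actually correspond.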