Browsing by Author "Xiao, Chunxia"
Now showing 1 - 3 of 3
Item
Shadow Inpainting and Removal Using Generative Adversarial Networks with Slice Convolutions (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Wei, Jinjiang; Long, Chengjiang; Zou, Hua; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
In this paper, we propose two-stage top-down and bottom-up Generative Adversarial Networks (TBGANs) for shadow inpainting and removal, which use a novel top-down encoder and a bottom-up decoder with slice convolutions. These slice convolutions can effectively extract and restore long-range spatial information for either down-sampling or up-sampling. Unlike previous deep-learning-based shadow removal methods, we first inpaint the shadow to handle possibly dark shadows and obtain a coarse shadow-removal image, and then, in a second stage, recover the details and enhance the color and texture with a non-local block that explores both local and global inter-dependencies of pixels. With such two-stage coarse-to-fine processing, the overall quality of shadow removal is greatly improved, and color retention in non-shaded areas is significantly better. Comparisons with a variety of mainstream shadow removal methods demonstrate that our proposed method outperforms the state-of-the-art methods.

Item
Specular Highlight Removal for Real-world Images (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Fu, Gang; Zhang, Qing; Song, Chengfang; Lin, Qifeng; Xiao, Chunxia; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Removing specular highlights from an image is a fundamental research problem in computer vision and computer graphics. While various methods have been proposed, they typically do not work well for real-world images due to the presence of rich textures, complex materials, hard shadows, occlusions, color illumination, etc. In this paper, we present a novel specular highlight removal method for real-world images. Our approach is based on two observations about real-world images: (i) the specular highlight is often small in size and sparse in distribution; (ii) the remaining diffuse image can be represented by a linear combination of a small number of basis colors with sparse encoding coefficients. Based on these two observations, we design an optimization framework for simultaneously estimating the diffuse and specular highlight images from a single image. Specifically, we recover the diffuse components of the regions with specular highlights by encouraging sparseness of the encoding coefficients using the L0 norm. Moreover, the encoding coefficients and the specular highlight are also subject to non-negativity, according to the additive color mixing theory and the definition of illumination, respectively. Extensive experiments on a variety of images validate the effectiveness of the proposed method and its superiority over previous methods.
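The TBGAN architecture in the shadow inpainting and removal item above is specific to that paper, but the non-local block it mentions for modelling global pixel inter-dependencies is a standard component. Below is a minimal PyTorch sketch of such a block in the usual embedded-Gaussian form; it is only an illustration of the idea, not the authors' exact design, and the channel sizes and layer names are assumptions.

```python
# Minimal sketch of a non-local block (embedded-Gaussian form), in the style of
# Wang et al., "Non-local Neural Networks". Illustrates how global pixel
# inter-dependencies can be modelled; NOT the exact block used in the TBGAN
# paper. Channel sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock2D(nn.Module):
    def __init__(self, in_channels, inter_channels=None):
        super().__init__()
        inter_channels = inter_channels or max(in_channels // 2, 1)
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)  # query
        self.phi = nn.Conv2d(in_channels, inter_channels, kernel_size=1)    # key
        self.g = nn.Conv2d(in_channels, inter_channels, kernel_size=1)      # value
        self.out = nn.Conv2d(inter_channels, in_channels, kernel_size=1)    # restore channels

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = F.softmax(q @ k, dim=-1)                # affinity between all pixel pairs
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

# Usage: refine a feature map so every position can attend to every other position.
feats = torch.randn(1, 64, 32, 32)
refined = NonLocalBlock2D(64)(feats)
```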
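The optimization framework in the specular highlight removal item can be summarised by its data model: the input image is explained as a diffuse part, written as a small colour basis times non-negative, sparse coefficients, plus a non-negative specular layer. The sketch below only builds toy data for that model and evaluates an L0-style objective; the variable names, weights, and the objective's exact form are assumptions, not the authors' solver.

```python
# Sketch of the data model behind the specular highlight removal item:
#   I ~= B @ W + S,  with W >= 0 sparse (L0 penalty) and S >= 0.
# This only *evaluates* an illustrative objective on toy data; it is not the
# authors' optimization. B, W, S and the weights lam/mu are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis = 1000, 8

B = rng.random((3, n_basis))            # basis colors (RGB), one per column
W = np.zeros((n_basis, n_pixels))       # encoding coefficients: non-negative, sparse
idx = rng.integers(0, n_basis, n_pixels)
W[idx, np.arange(n_pixels)] = rng.random(n_pixels)   # roughly one active basis per pixel

S = np.zeros((3, n_pixels))             # specular highlight: non-negative
hot = rng.random(n_pixels) < 0.05       # highlights are small and sparsely distributed
S[:, hot] = rng.random(hot.sum())       # near-white highlight: same value in R, G, B

I = B @ W + S                           # observed image (3 x n_pixels), additive mixing

def objective(I, B, W, S, lam=0.1, mu=0.1):
    """Data term + L0 sparsity on the coefficients + sparsity of the specular layer."""
    data = np.sum((I - B @ W - S) ** 2)
    l0_W = np.count_nonzero(W)                    # L0 "norm": number of non-zeros
    l0_S = np.count_nonzero(S.any(axis=0))        # number of highlighted pixels
    return data + lam * l0_W + mu * l0_S

print(objective(I, B, W, S))
```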
Item
Thin Cloud Removal for Single RGB Aerial Image (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Song, Chengfang; Xiao, Chunxia; Zhang, Yeting; Sui, Haigang; Benes, Bedrich and Hauser, Helwig
Acquired above variable clouds, aerial images contain components of both ground reflection and cloud effect. Due to their non-uniformity, clouds in aerial images are even harder to remove than haze in terrestrial images. This paper proposes a divide-and-conquer scheme to remove thin translucent clouds from a single RGB aerial image. Based on the colour attenuation prior, we design a veiling metric that effectively indicates the local concentration of clouds. Using this metric, an aerial image containing clouds of varying thickness is segmented into multiple regions, each veiled by clouds of nearly equal concentration and hence subject to common assumptions, such as the boundary constraint on transmission. The atmospheric light in each region is estimated by a modified local colour-line model and composed into a spatially varying airlight map for the entire image. Scene transmission is then estimated and further refined by a weighted L1-norm based contextual regularization. Finally, we recover the ground reflection via the atmospheric scattering model. We verify our cloud removal method on a number of aerial images containing thin clouds and compare our results with classical single-image dehazing methods and a state-of-the-art learning-based declouding method, respectively.
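The final step of the cloud removal item, recovering ground reflection via the atmospheric scattering model, follows the standard dehazing formulation I = J*t + A*(1 - t). A minimal sketch of that inversion is shown below, assuming a spatially varying airlight map A and a refined transmission t have already been estimated; the estimation steps, which are the paper's actual contribution, are not reproduced, and the lower bound t_min is an illustrative safeguard.

```python
# Minimal sketch of the recovery step via the atmospheric scattering model:
#   I = J * t + A * (1 - t)   =>   J = (I - A) / t + A
# The airlight map A and transmission t are assumed to be already estimated
# (those steps are the paper's contribution and are not reproduced here);
# t_min is an illustrative safeguard, not a value from the paper.
import numpy as np

def recover_ground_reflection(I, A, t, t_min=0.1):
    """I, A: (H, W, 3) images in [0, 1]; t: (H, W) transmission. Returns declouded J."""
    t = np.clip(t, t_min, 1.0)[..., None]        # avoid division blow-up where clouds are thick
    J = (I - A) / t + A                          # invert the scattering model per pixel
    return np.clip(J, 0.0, 1.0)

# Toy usage with a uniform airlight and mid-range transmission.
I = np.random.default_rng(1).random((64, 64, 3))
A = np.full_like(I, 0.8)
t = np.full((64, 64), 0.6)
J = recover_ground_reflection(I, A, t)
```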