Browsing by Author "Li, Chenhui"
Now showing 1 - 3 of 3
Item: Translucent Image Recoloring through Homography Estimation (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Huang, Yifei; Wang, Changbo; Li, Chenhui; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
Image color editing techniques are of great significance for users who wish to adjust image colors, but previous work has paid little attention to translucent images. In this paper, we propose a new method to recolor translucent images while preserving the detailed information and color relationships of the source image. We treat recoloring as a location-transformation problem and solve it in two steps: automatic palette extraction and homography estimation. First, we propose the Hmeans method to extract the dominant colors of the source image based on histogram statistics and clustering. Then, we use homography estimation to map the source colors to the desired colors in the CIE-LAB color space. Further, we adopt a non-linear optimization approach to refine the result of the previous step. The proposed method maintains high fidelity to the source image. Experiments show that our method produces state-of-the-art visual results, particularly in shadow areas. Source images with ground truth generated by a ray tracer further verify the effectiveness of our method.

Item: VisFM: Visual Analysis of Image Feature Matchings (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Li, Chenhui; Baciu, George; Chen, Min and Benes, Bedrich
Feature matching is one of the most basic and pervasive problems in computer vision, and it has become a primary component in big data analytics. Many tools have been developed for extracting and matching features in video streams and image frames. However, one of the most basic tools, a tool that simply visualizes matched features for comparing and evaluating computer vision algorithms, is not generally available, especially when dealing with a large number of matching lines. We introduce VisFM, an integrated visual analysis system for comprehending and exploring image feature matchings. VisFM presents a matching view with intuitive line bundling to provide useful insights into the quality of matched features. VisFM can also summarize the features and matchings in a group view, helping domain experts observe feature matching patterns from multiple perspectives. VisFM incorporates a series of interactions for exploring the feature data. We demonstrate the visual efficacy of VisFM by applying it to three scenarios. Informal expert feedback from our collaborator in computer vision demonstrates how VisFM can be used for comparing and analysing feature matchings when the goal is to improve an image retrieval algorithm.
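The pipeline described in the first item above (palette extraction, then a colour-space mapping estimated from palette correspondences) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the paper's Hmeans extractor is replaced by plain k-means, and the homography estimation plus non-linear refinement are approximated by a single least-squares affine map in CIE-LAB. The function name, palette size, and target palette are placeholders.

```python
import cv2
import numpy as np

def recolor_translucent(src_bgr, target_palette_lab, n_colors=5):
    """Map an image toward a desired LAB palette (simplified sketch)."""
    lab = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    pixels = lab.reshape(-1, 3)

    # Stand-in for the paper's Hmeans palette extraction: plain k-means on LAB pixels.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(pixels, n_colors, None, criteria, 3, cv2.KMEANS_PP_CENTERS)

    # Simplified surrogate for the homography-estimation step: a least-squares affine
    # map in LAB sending the extracted palette to the desired palette.
    src_h = np.hstack([centers, np.ones((n_colors, 1), np.float32)])
    M, *_ = np.linalg.lstsq(src_h, np.asarray(target_palette_lab, np.float32), rcond=None)

    # Apply the map to every pixel and convert back to BGR.
    pix_h = np.hstack([pixels, np.ones((pixels.shape[0], 1), np.float32)])
    mapped = np.clip(pix_h @ M, 0, 255).reshape(lab.shape).astype(np.uint8)
    return cv2.cvtColor(mapped, cv2.COLOR_LAB2BGR)
```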
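VisFM itself is not publicly distributed. As a rough sketch of the kind of matching data it visualizes, the snippet below produces a plain OpenCV match rendering; the clutter of such renderings for large numbers of matching lines is what the paper's bundled matching view is designed to relieve. The image paths and the ORB/brute-force matcher choice are placeholder assumptions, not the paper's setup.

```python
import cv2

# Placeholder input frames; any pair of overlapping images works.
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching, sorted by descriptor distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Plain line-per-match rendering; with hundreds of matches this becomes hard to read,
# which is the problem VisFM's line bundling and group view address.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:200], None)
cv2.imwrite("matches.png", vis)
```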
Item: Visualizing Dynamics of Urban Regions Through a Geo‐Semantic Graph‐Based Method (© 2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2020)
Wang, Yunzhe; Baciu, George; Li, Chenhui; Benes, Bedrich and Hauser, Helwig
In urban analysis, a key endeavour is to find regions where a primary socio‐economic activity dominates. This can be accomplished by aggregating neighbouring locations where similar activities take place. However, people move and their activities change over time, and regional boundaries are not stationary, so it is challenging to update region divisions and track their evolution. Geo‐textual data embody geographical information and activity descriptions. We obtain changes in regional boundaries by iteratively processing a sequence of latent graphs constructed from geo‐textual data. Region characteristics are interpreted through topics learned with the latent Dirichlet allocation model. We also propose a matching algorithm to expose region transformations between different timestamps. Interesting patterns of evolution emerge after clustering the migration trajectories of region centroids. In our visual system, users can explore the evolution of regions through animations and linked snapshots. To facilitate visual comparisons, we represent regions with hexagonal tiling, which better approximates arbitrary regional shapes. The effectiveness of our method is evaluated in two case studies on real‐world datasets, and a user study shows that our visual analytics system is highly effective for studying such regional maps.
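As a hedged sketch of the topic-modelling step mentioned in the last abstract (region characteristics interpreted through topics learned with latent Dirichlet allocation), the snippet below fits an LDA model to a few toy geo-textual documents. The example documents, the topic count, and the idea of one document per location cell are assumptions for illustration; the latent-graph construction, region matching, and hexagonal-tiling views of the paper are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy geo-textual data: each document aggregates the text attached to one location cell.
docs = [
    "coffee brunch cafe bakery espresso",
    "office meeting startup coworking finance",
    "museum gallery exhibition sculpture art",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

# Learn topics; each location's topic mixture can then be used to characterize regions.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Print the top terms per topic as a human-readable region label.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {', '.join(top)}")
```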