Browsing by Author "Liu, Yang"
Now showing 1 - 6 of 6
Item: Detection of Impurities in Wool Based on Improved YOLOv8 (The Eurographics Association, 2023)
Liu, Yang; Ji, Yatu; Ren, Qing Dao Er Ji; Shi, Bao; Zhuang, Xufei; Yao, Miaomiao; Li, Xiaomei; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
In the current production process of wool products, the cleaning of raw wool has been automated. However, checking whether the washed and dried wool still contains excessive impurities still requires manual inspection, which greatly reduces production efficiency. To address this impurity-detection problem, we propose an improved model based on YOLOv8. Our work applies techniques to handle low-resource model training and incorporates a small-object detection block into the new network structure. The proposed model achieves an accuracy of 84.3% on our self-built dataset and also performs well on the VisDrone2019 dataset.

Item: Latent Space Cartography: Visual Analysis of Vector Space Embeddings (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Liu, Yang; Jun, Eunice; Li, Qisheng; Heer, Jeffrey; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Latent spaces (reduced-dimensionality vector space embeddings of data, fit via machine learning) have been shown to capture interesting semantic properties and to support data analysis and synthesis within a domain. Interpreting latent spaces is challenging because prior knowledge, sometimes subtle and implicit, is essential to the process. We contribute methods for "latent space cartography", the process of mapping and comparing meaningful semantic dimensions within latent spaces. We first survey relevant machine learning, natural language processing, and scientific research to distill common tasks and propose a workflow process. Next, we present an integrated visual analysis system supporting this workflow, enabling users to discover, define, and verify meaningful relationships among data points encoded within latent space dimensions. Three case studies demonstrate how users of our system can compare latent space variants in image generation, challenge existing findings on cancer transcriptomes, and assess a word embedding benchmark.

Item: SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Zheng, Xinyang; Liu, Yang; Wang, Pengshuai; Tong, Xin; Campen, Marcel; Spagnuolo, Michela
We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, which aims to reduce the visual and geometric dissimilarity between generated shapes and a shape collection. We extend StyleGAN2 to 3D generation, utilize the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel global and local shape discriminators that distinguish real from fake SDF values and gradients, significantly improving shape geometry and visual quality. We further complement the evaluation metrics for 3D generative models with shading-image-based Fréchet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state of the art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-view image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/Zhengxinyang/SDF-StyleGAN.

Item: Semantics-guided Generative Diffusion Model with a 3DMM Model Condition for Face Swapping (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Liu, Xiyao; Liu, Yang; Zheng, Yuhao; Yang, Ting; Zhang, Jian; Wang, Victoria; Fang, Hui; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Face swapping is a technique that replaces a face in target media with another face of a different identity taken from a source face image. Despite the impressive synthesis quality achieved by recent generative models, research on the effective use of prior knowledge and semantic guidance for photo-realistic face swapping remains limited. In this paper, we propose a novel conditional Denoising Diffusion Probabilistic Model (DDPM) enforced by two-level face prior guidance. Specifically, it includes (i) an image-level condition generated by a 3D Morphable Model (3DMM) and (ii) high-semantic-level guidance driven by information extracted from several pre-trained attribute classifiers, for high-quality face image synthesis. Although the swapped face image produced by the 3DMM does not achieve photo-realistic quality on its own, it provides a strong image-level prior, in parallel with the high-level face semantics, to guide the DDPM toward high-fidelity image generation. The experimental results demonstrate that our method outperforms state-of-the-art face swapping methods on benchmark datasets in synthesis quality and in its ability to preserve the target face attributes while swapping in the source face identity.

Item: Surface Fairing towards Regular Principal Curvature Line Networks (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Chu, Lei; Bo, Pengbo; Liu, Yang; Wang, Wenping; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Freeform surfaces whose principal curvature line networks are regularly distributed are essential to many real-world applications such as CAD modeling, architectural design, and industrial fabrication. However, most designed surfaces do not have this property, because such constraints are hard to enforce during the design process. In this paper, we present a novel surface fairing method whose objective is a regular distribution of the principal curvature line network on a surface. Our method first removes the high-frequency signals from the curvature tensor field of an input freeform surface with a novel rolling guidance tensor filter, which yields a more regular and smooth curvature tensor field, and then deforms the input surface to match the smoothed field as closely as possible. As an application, we solve the problem of approximating freeform surfaces with regular principal curvature line networks discretized by quadrilateral meshes. By imposing circular or conical conditions on the quadrilateral mesh to guarantee the existence of discrete principal curvature line networks, while minimizing the approximation error to the original surface and improving the fairness of the quad mesh, we obtain a regular discrete principal curvature line network that approximates the original surface. We evaluate the efficacy of our method on various freeform surfaces and demonstrate the superiority of the rolling guidance tensor filter over other tensor smoothing techniques. We also use our method to generate high-quality circular/conical meshes for architectural design and cyclide spline surfaces for CAD modeling.

Item: A Survey of Tasks and Visualizations in Multiverse Analysis Reports (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Hall, Brian D.; Liu, Yang; Jansen, Yvonne; Dragicevic, Pierre; Chevalier, Fanny; Kay, Matthew; Hauser, Helwig and Alliez, Pierre
Analysing data from experiments is a complex, multi-step process, often with multiple defensible choices available at each step. Analysts often report a single analysis without documenting how it was chosen, which can cause serious transparency and methodological issues. To make the sensitivity of analysis results to analytical choices transparent, some statisticians and methodologists advocate 'multiverse analysis': reporting the full range of outcomes that result from all combinations of defensible analytic choices. Summarizing this combinatorial explosion of statistical results presents unique challenges; several approaches to visualizing the output of multiverse analyses have been proposed across a variety of fields (e.g. psychology, statistics, economics, neuroscience). In this article, we (1) introduce a consistent conceptual framework and terminology for multiverse analyses that can be applied across fields; (2) identify the tasks researchers try to accomplish when visualizing multiverse analyses; and (3) classify multiverse visualizations into 'archetypes', assessing how well each archetype supports each task. Our work sets a foundation for subsequent research on developing visualization tools and techniques to support multiverse analysis and its reporting.
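As a rough illustration of the combinatorics behind a multiverse analysis as described in the last item, the sketch below enumerates every combination of a few analytic choices and records one outcome per "universe". The choice names and the toy two-group comparison are illustrative assumptions, not taken from the article.

```python
from itertools import product
import math

# Hypothetical analytic choices for a toy two-group comparison.
# Each key is one analysis step; each list holds its defensible options.
choices = {
    "transform": ["raw", "log"],
    "estimator": ["mean_diff", "median_diff"],
    "drop_max": [False, True],  # crude stand-in for an outlier rule
}

def run_universe(spec, group_a, group_b):
    """Run one 'universe': apply a single combination of choices, return an effect estimate."""
    a, b = list(group_a), list(group_b)
    if spec["drop_max"]:
        a.remove(max(a))
        b.remove(max(b))
    if spec["transform"] == "log":
        a = [math.log(x) for x in a]
        b = [math.log(x) for x in b]
    if spec["estimator"] == "mean_diff":
        return sum(a) / len(a) - sum(b) / len(b)
    a.sort()
    b.sort()
    return a[len(a) // 2] - b[len(b) // 2]

group_a = [2.0, 3.0, 4.0, 9.0]
group_b = [1.0, 2.0, 3.0, 8.0]

# The multiverse: one (specification, outcome) pair per combination of choices.
results = []
for combo in product(*choices.values()):
    spec = dict(zip(choices, combo))
    results.append((spec, run_universe(spec, group_a, group_b)))

print(len(results))  # 2 * 2 * 2 = 8 universes
```

Even three binary-or-ternary choices multiply quickly; the survey's visualization archetypes are ways of summarizing the resulting grid of (specification, outcome) pairs.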