Browsing by Author "Huang, Hui"
Now showing 1 - 9 of 9
Item: 4D Reconstruction of Blooming Flowers (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Zheng, Qian; Fan, Xiaochen; Gong, Minglun; Sharf, Andrei; Deussen, Oliver; Huang, Hui; Chen, Min and Zhang, Hao (Richard)
Flower blooming is a beautiful phenomenon in nature: flowers open in an intricate and complex manner while petals bend, stretch and twist under various deformations. Flower petals are typically thin structures arranged in tight configurations with heavy self-occlusions. Thus, capturing and reconstructing spatially and temporally coherent sequences of blooming flowers is highly challenging. Early in the process only exterior petals are visible, so interior parts will be completely missing from the captured data. Using commercially available 3D scanners, we capture the visible parts of blooming flowers as a sequence of 3D point clouds. We reconstruct the flower geometry and deformation over time using a template-based dynamic tracking algorithm. To track and model interior petals hidden in early stages of the blooming process, we employ an adaptively constrained optimization. Flower characteristics are exploited to track petals both forward and backward in time. Our methods allow us to faithfully reconstruct the blooming process of different flower species. In addition, we provide comparisons with state-of-the-art physical simulation-based approaches and evaluate our approach using photos of the captured real flowers.

Item: Active Scene Understanding via Online Semantic Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Zheng, Lintao; Zhu, Chenyang; Zhang, Jiazhao; Zhao, Hang; Huang, Hui; Niessner, Matthias; Xu, Kai; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores, for each grid cell, the score of the corresponding view, which measures how much that view reduces the uncertainty (entropy) of both the geometric reconstruction and the semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step.
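The VSF-driven view selection just described can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the entropy model, the data layout, and the visibility sets are invented for the example; each candidate view is scored by the entropy of the voxels it would observe, and the NBV is the score maximizer.

```python
import math

# Hypothetical sketch of the viewing-score-field (VSF) idea: score each
# candidate view by how much uncertainty (entropy) it could remove from the
# geometric reconstruction and the semantic labeling, then pick the maximum.

def entropy(p):
    """Binary entropy of a probability p; 0 at the certain endpoints."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def view_score(view, voxels):
    """Total remaining uncertainty over the voxels this view observes."""
    return sum(entropy(v["geom_p"]) + entropy(v["label_p"])
               for v in voxels if view in v["visible_from"])

def next_best_view(views, voxels):
    """NBV selection: the candidate view with the highest score."""
    return max(views, key=lambda view: view_score(view, voxels))

# Toy scene: three voxels with occupancy/label confidences, two views.
voxels = [
    {"geom_p": 0.5, "label_p": 0.5, "visible_from": {"A"}},       # very uncertain
    {"geom_p": 0.9, "label_p": 0.8, "visible_from": {"A", "B"}},  # fairly certain
    {"geom_p": 0.99, "label_p": 0.99, "visible_from": {"B"}},     # nearly certain
]
print(next_best_view(["A", "B"], voxels))  # prints "A": it sees the uncertain voxel
```

In the actual system the score field is defined over 2D locations and azimuth angles and is re-estimated online; the toy above only shows the entropy-reduction criterion behind the selection.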
We then jointly optimize the traverse path and the camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along them. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.

Item: Laplace–Beltrami Operator on Point Clouds Based on Anisotropic Voronoi Diagram (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018)
Qin, Hongxing; Chen, Yi; Wang, Yunhai; Hong, Xiaoyang; Yin, Kangkang; Huang, Hui; Chen, Min and Benes, Bedrich
A symmetrizable and convergent Laplace–Beltrami operator (LBO) is an indispensable tool for spectral geometric analysis of point clouds. The LBO introduced by Liu et al. [LPG12] is guaranteed to be symmetrizable, but its convergence degrades when it is applied to models with sharp features. In this paper, we propose a novel LBO that is not only symmetrizable but can also handle point-sampled surfaces containing significant sharp features. By constructing an anisotropic Voronoi diagram in the local tangent space, the LBO can be well constructed for any given point. To compute the area of an anisotropic Voronoi cell, we introduce an efficient approximation that projects the cell onto the local tangent plane, and we prove its convergence. We present numerical experiments that clearly demonstrate the robustness and efficiency of the proposed LBO for point clouds that may contain noise, outliers, and non-uniformities in thickness and spacing. Moreover, we show that its spectrum is more accurate than those of existing LBOs for scanned points or surfaces with sharp features.

Item: Learning Elastic Constitutive Material and Damping Models (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Wang, Bin; Deng, Yuanmin; Kry, Paul; Ascher, Uri; Huang, Hui; Chen, Baoquan; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Commonly used linear and nonlinear constitutive material models in deformation simulation contain many simplifications and cover only a tiny part of possible material behavior. In this work we propose a framework for learning customized models of deformable materials from example surface trajectories. The key idea is to iteratively improve a correction to a nominal model of the elastic and damping properties of the object, so that new forward simulations with the learned correction more accurately predict the behavior of a given soft object. Space-time optimization is employed to identify gentle control forces with which we extract the data necessary for model inference, and to finally encapsulate the material correction into a compact parametric form. Furthermore, a patch-based position constraint is proposed to tackle the challenge of handling incomplete and noisy observations arising in real-world examples.
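The key idea of correcting a nominal material model until forward simulation reproduces an observation can be sketched in one dimension. The toy below is my own illustrative assumption, not the paper's space-time formulation: the "object" is a single linear spring, the "observation" is one static displacement under a known force, and a scalar stiffness correction dk is fitted by gradient descent.

```python
# 1D toy of learning a correction to a nominal elastic model: simulate with
# a corrected stiffness, compare against the observed displacement, and
# descend the squared residual until the correction explains the data.

def simulate(force, k_nominal, dk):
    """Forward model: static displacement of a spring with corrected stiffness."""
    return force / (k_nominal + dk)

def learn_correction(force, observed_x, k_nominal, lr=50.0, steps=2000):
    """Gradient descent on the squared residual between simulation and data."""
    dk = 0.0
    for _ in range(steps):
        residual = simulate(force, k_nominal, dk) - observed_x
        grad = residual * (-force / (k_nominal + dk) ** 2)  # chain rule w.r.t. dk
        dk -= lr * grad
    return dk

k_true, k_nominal, force = 12.0, 10.0, 3.0
observed_x = force / k_true                  # the "captured" behavior
dk = learn_correction(force, observed_x, k_nominal)
print(round(k_nominal + dk, 3))              # recovers the true stiffness, 12.0
```

The real method fits a high-dimensional correction to full surface trajectories (including damping) rather than a single scalar, but the loop structure, nominal model plus learned correction validated by re-simulation, is the same.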
We demonstrate the effectiveness of our method on a set of synthetic examples, as well as on data captured from real-world homogeneous elastic objects.

Item: Point Pattern Synthesis via Irregular Convolution (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Tu, Peihan; Lischinski, Dani; Huang, Hui; Bommes, David and Huang, Hui
Point pattern synthesis is a fundamental tool with various applications in computer graphics. To synthesize a point pattern, some techniques have taken an example-based approach, where the user provides a small exemplar of the target pattern. However, it remains challenging to synthesize patterns that faithfully capture the structures in the given exemplar. In this paper, we present a new example-based point pattern synthesis method that preserves both local and non-local structures present in the exemplar. Our method leverages recent neural texture synthesis techniques that have proven effective in synthesizing structured textures. The network we present is end-to-end: it uses an irregular convolution layer, which converts a point pattern into a gridded feature map, to directly optimize point coordinates. The synthesis is then performed by matching inter- and intra-correlations of the responses produced by subsequent convolution layers.
We demonstrate that our point pattern synthesis qualitatively outperforms state-of-the-art methods on challenging structured patterns, and that it enables various graphical applications, such as object placement in natural scenes, creative element patterns, and realistic urban layouts in a 3D virtual environment.

Item: Symposium on Geometry Processing 2019 - CGF38-5: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Bommes, David; Huang, Hui; Bommes, David and Huang, Hui

Item: Symposium on Geometry Processing 2019 – Posters: Frontmatter (Eurographics Association, 2019)
Bommes, David; Huang, Hui; Bommes, David and Huang, Hui

Item: Uncut Aerial Video via a Single Sketch (The Eurographics Association and John Wiley & Sons Ltd., 2018)
Yang, Hao; Xie, Ke; Huang, Shengqiu; Huang, Hui; Fu, Hongbo and Ghosh, Abhijeet and Kopf, Johannes
UAV filming is becoming popular nowadays, with more and more stunning aerial videos appearing online. Nonetheless, making a good uncut, one-long-shot aerial video of a large-scale outdoor scene is still quite challenging, and few eye-catching pieces are available yet. It requires users to have both consummate drone-controlling skill and a good sense of filming aesthetics. If done entirely manually, the user has to simultaneously adjust the drone position and the mounted camera orientation during the whole flyby, while trying to keep all operation changes smooth. Recent research has proposed a number of planning tools for automatic or semi-automatic aerial videography; however, most require rather complex user inputs and heavy computation. In this paper, we propose a user-friendly system designed to simplify the input and automatically generate continuous camera moves that capture compelling aerial videos without any post cutting or editing. Assuming a rough 2.5D scene model that includes all the regions of interest is available, users are only required to casually draw a single sketch on the 2D map.
Our system then analyzes this rough sketch, computes the corresponding quality views within the 3D safe flying zone, and creates a globally optimal camera trajectory passing through the regions of user interest by solving a combinatorial problem. Finally, we locally optimize the drone flying speed to make the resulting aerial videos more visually pleasing.

Item: UprightRL: Upright Orientation Estimation of 3D Shapes via Reinforcement Learning (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Chen, Luanmin; Xu, Juzhan; Wang, Chuan; Huang, Haibin; Huang, Hui; Hu, Ruizhen; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
In this paper, we study the problem of 3D shape upright orientation estimation from the perspective of reinforcement learning, i.e. we teach a machine (agent) to orient 3D shapes step by step to upright, given its current observation. Unlike previous methods, we treat this problem as a sequential decision-making process instead of a strongly supervised learning problem. To achieve this, we propose UprightRL, a deep network architecture designed for upright orientation estimation. UprightRL mainly consists of two submodules: an Actor module and a Critic module, which can be learned in a reinforcement learning manner. Specifically, the Actor module selects an action from the action space to transform the point cloud and obtain the new point cloud for the next environment state, while the Critic module evaluates the strategy and guides the Actor in choosing the next action. Moreover, we design a reward function that gives the agent a positive reward for actions that move the model towards its upright orientation, and a negative reward otherwise. We conducted extensive experiments to demonstrate the effectiveness of the proposed model, and the results show that our network outperforms the state-of-the-art.
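The reward design described above (positive when an action moves the shape towards upright, negative otherwise) can be sketched with a toy agent. This is my own construction, not the paper's network: orientation is reduced to a single tilt angle, so alignment with world up is simply cos(angle), and a greedy policy stands in for the learned Actor.

```python
import math

# Toy sketch of the UprightRL-style reward: an action earns +1 if it brings
# the shape's up axis closer to the world up direction, and -1 otherwise.

def reward(angle_before, angle_after):
    """+1 if the action improved alignment with world up, -1 otherwise."""
    return 1.0 if math.cos(angle_after) > math.cos(angle_before) else -1.0

# A greedy stand-in for the learned Actor: at each step, try both rotation
# actions and keep the one the reward prefers.
angle, step = 0.8, 0.1     # initial tilt (radians) and rotation step size
for _ in range(8):
    candidates = [angle - step, angle + step]
    angle = max(candidates, key=lambda a: reward(angle, a))
print(f"final tilt ~ {abs(angle):.2f} rad")  # the agent has rotated the shape upright
```

In the actual method the state is a point cloud, the actions are 3D rotations, and the Critic learns the long-horizon value of each action instead of this one-step greedy choice; only the sign structure of the reward is illustrated here.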
We also apply our method to a robot grasping-and-placing experiment to demonstrate its practicality.