Browsing by Author "Chen, Baoquan"
Now showing 1 - 9 of 9
Item Canis: A High-Level Language for Data-Driven Chart Animations (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Ge, Tong; Zhao, Yue; Lee, Bongshin; Ren, Donghao; Chen, Baoquan; Wang, Yunhai; Viola, Ivan and Gleicher, Michael and Landesberger von Antburg, Tatiana

In this paper, we introduce Canis, a high-level domain-specific language that enables declarative specifications of data-driven chart animations. By leveraging data-enriched SVG charts, its grammar of animations can be applied to charts created by existing chart construction tools. With Canis, designers can select marks from the charts, partition the selected marks into mark units based on data attributes, and apply animation effects to the mark units, with control over when the effects start. The Canis compiler automatically synthesizes Lottie animation JSON files [Aira], which can be rendered natively across multiple platforms. To demonstrate Canis' expressiveness, we present a wide range of chart animations. We also evaluate its scalability by showing the effectiveness of our compiler in reducing the output specification size, and we compare its performance on different platforms against D3.
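For flavor, here is a minimal Python sketch of what a declarative, Canis-style animation spec might look like when assembled programmatically and serialized to JSON. The field names ("chart", "selector", "partition", "effects", "timing") and the input file are hypothetical stand-ins chosen for illustration, not the actual Canis grammar; see the paper for the real language.

```python
import json

# Hypothetical, Canis-style animation spec assembled in Python.
# All field names below are illustrative stand-ins, not Canis syntax.
spec = {
    "chart": "bar_chart.dsvg",             # assumed data-enriched SVG chart
    "selector": ".mark",                   # select marks from the chart
    "partition": {"by": "year"},           # split marks into units by a data attribute
    "effects": [
        {"type": "grow", "duration": 500}  # entrance effect applied per unit
    ],
    "timing": "sequential",                # each unit's effect starts after the previous
}

print(json.dumps(spec, indent=2))          # spec would then be handed to a compiler
```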
Item Deep Video-Based Performance Cloning (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Aberman, Kfir; Shi, Mingyi; Liao, Jing; Lischinski, Dani; Chen, Baoquan; Cohen-Or, Daniel; Alliez, Pierre and Pellacini, Fabio

We present a new video-based performance cloning technique. After training a deep generative network on a reference video that captures the appearance and dynamics of a target actor, we are able to generate videos in which this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data self-generated from the reference video. The second branch uses unpaired data to improve the generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured in the reference video. We demonstrate a variety of promising results in which our method generates temporally coherent videos for challenging scenarios where the reference and driving videos consist of very different dance performances.

Item Fabricable Unobtrusive 3D-QR-Codes with Directional Light (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Peng, Hao; Liu, Peiqing; Lu, Lin; Sharf, Andrei; Liu, Lin; Lischinski, Dani; Chen, Baoquan; Jacobson, Alec and Huang, Qixing

QR code is a 2D matrix barcode widely used for product tracking, identification, document management, and general marketing. Recently, there have been various attempts to utilize QR codes in 3D manufacturing by carving them into the surface of a printed 3D shape. Nevertheless, significant shape editing and modulation may be required for the embedded 3D-QR-codes to remain readable with good decoding accuracy. In this paper, we introduce a novel QR code 3D fabrication framework aimed at unobtrusively embedding 3D-QR-codes in the shape, hence introducing minimal shape modulation. Essentially, our method computes bi-directional carvings in the 3D shape surface to obtain the black-and-white QR pattern. Under a directional light source, the black-and-white QR pattern emerges as lit and shadowed blocks on the shape, respectively. To keep the modulation minimal and the code unobtrusive, we optimize the QR code carving with respect to shape geometry, visual disparity, and light source position. Our technique employs a simulation of lighting phenomena through the carved modules on the shape to ensure adequate contrast of the printed 3D-QR-code.
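The contrast test at the heart of such a pipeline can be illustrated with a few lines of Lambertian shading. The sketch below is our own simplified illustration, not the paper's simulator: it compares the diffuse intensity of the flat surface against a tilted face of a carved module under an assumed directional light, where a large difference means the module would read as dark.

```python
import numpy as np

def lambert(normal, light_dir):
    """Diffuse (Lambertian) intensity for a unit normal under a directional light."""
    return max(0.0, float(np.dot(normal, light_dir)))

# Assumed light direction and surface normals, chosen for illustration only.
light = np.array([0.0, -0.6, 0.8])    # directional light (already unit length)
flat_n = np.array([0.0, 0.0, 1.0])    # un-carved surface normal
carved_n = np.array([0.0, 0.8, 0.6])  # tilted face of a carved module (unit length)

# A carved face tilted away from the light reads as a dark module.
contrast = lambert(flat_n, light) - lambert(carved_n, light)
print(f"module contrast: {contrast:.2f}")  # larger => more reliable decoding
```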
Item Learning Elastic Constitutive Material and Damping Models (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Wang, Bin; Deng, Yuanmin; Kry, Paul; Ascher, Uri; Huang, Hui; Chen, Baoquan; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue

Commonly used linear and nonlinear constitutive material models in deformation simulation contain many simplifications and cover only a tiny part of possible material behavior. In this work we propose a framework for learning customized models of deformable materials from example surface trajectories. The key idea is to iteratively improve a correction to a nominal model of the elastic and damping properties of the object, which allows new forward simulations with the learned correction to more accurately predict the behavior of a given soft object. Space-time optimization is employed to identify gentle control forces, with which we extract the data necessary for model inference, and finally to encapsulate the material correction in a compact parametric form. Furthermore, a patch-based position constraint is proposed to tackle the challenge of handling incomplete and noisy observations arising in real-world examples. We demonstrate the effectiveness of our method on a set of synthetic examples, as well as on data captured from real-world homogeneous elastic objects.

Item MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Chen, Xuelin; Li, Weiyu; Cohen-Or, Daniel; Mitra, Niloy J.; Chen, Baoquan; Chaine, Raphaëlle; Kim, Min H.

Synthesizing novel views of dynamic humans from stationary monocular cameras is a specialized but desirable setup. This is particularly attractive as it does not require static scenes, controlled environments, or specialized capture hardware. In contrast to techniques that exploit multi-view observations, the problem of modeling a dynamic scene from a single view is significantly more under-constrained and ill-posed. In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function. We learn the proposed representation by optimizing for a dynamic scene that minimizes the total rendering error over all the observed images. At the heart of our work lies a carefully designed optimization scheme, which includes a dedicated initialization step and is constrained by a motion consensus regularization on the estimated motion flow. We extensively evaluate MoCo-Flow on several datasets that contain human motions of varying complexity, and compare, both qualitatively and quantitatively, to several baselines and ablated variations of our method, showing the efficacy and merits of the proposed approach. The pretrained model, code, and data will be released for research purposes upon paper acceptance.

Item PointSkelCNN: Deep Learning-Based 3D Human Skeleton Extraction from Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Qin, Hongxing; Zhang, Songshan; Liu, Qihuang; Chen, Li; Chen, Baoquan; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue

A 3D human skeleton plays an important role in human shape reconstruction and human animation. Remarkable advances have been achieved recently in estimating 3D human skeletons from color and depth images via powerful deep convolutional neural networks. However, applying deep learning frameworks to 3D human skeleton extraction from point clouds remains challenging because of the sparsity of point clouds and the high nonlinearity of human skeleton regression. In this study, we develop a deep learning-based approach for 3D human skeleton extraction from point clouds. We cast 3D human skeleton extraction as offset vector regression and human body segmentation via deep learning-based point cloud contraction. Furthermore, a disambiguation strategy is adopted to improve the robustness of joint point regression. Experiments on the public human pose dataset UBC3V and the human point cloud skeleton dataset 3DHumanSkeleton compiled by the authors show that the proposed approach outperforms the state-of-the-art methods.

Item Rigid Registration of Point Clouds Based on Partial Optimal Transport (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Qin, Hongxing; Zhang, Yucheng; Liu, Zhentao; Chen, Baoquan; Hauser, Helwig and Alliez, Pierre

For rigid point cloud registration, algorithms based on soft correspondences are more robust than the traditional ICP method and its variants. However, point clouds with severe outliers and missing data may lead to imprecise many-to-many correspondences and, consequently, inaccurate registration. In this study, we propose a point cloud registration algorithm based on partial optimal transport with a hard marginal constraint. The hard marginal constraint provides an explicit parameter to adjust the ratio of points that should be accurately matched, and it helps avoid incorrect many-to-many correspondences. Experiments show that the proposed method achieves state-of-the-art registration results when dealing with point clouds containing a significant amount of outliers and missing points.
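To make the idea concrete, here is a minimal sketch of one alignment step built on the POT library (pip install pot), using its partial-Wasserstein solver together with a standard weighted Kabsch step. This is our own illustration under those assumptions, not the authors' implementation; the mass parameter plays the role of the hard marginal constraint described above.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def rigid_from_partial_ot(X, Y, mass=0.8):
    """One alignment step of source X (n,3) onto target Y (m,3)."""
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    M = ot.dist(X, Y)  # squared Euclidean cost matrix
    # 'mass' is the fraction of total mass that must be transported;
    # the remainder may be left unmatched as outliers / missing data.
    G = ot.partial.partial_wasserstein(a, b, M, m=mass)
    w = G.sum(axis=1)                                  # matched mass per source point
    target = G @ Y / np.maximum(w, 1e-12)[:, None]     # soft correspondences
    # Weighted Kabsch: closed-form best rigid transform X -> target.
    mu_x = (w[:, None] * X).sum(0) / w.sum()
    mu_y = (w[:, None] * target).sum(0) / w.sum()
    H = (w[:, None] * (X - mu_x)).T @ (target - mu_y)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_y - R @ mu_x
    return R, t
```

In a full pipeline this step would be iterated, recomputing the transport plan after applying each estimated transform, in the usual alternating fashion.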
Item Towards a Neural Graphics Pipeline for Controllable Image Generation (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Chen, Xuelin; Cohen-Or, Daniel; Chen, Baoquan; Mitra, Niloy J.; Mitra, Niloy and Viola, Ivan

In this paper, we leverage advances in neural networks to form a neural rendering approach for controllable image generation, thereby bypassing the need for detailed modeling in the conventional graphics pipeline. To this end, we present the Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles for illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely, DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improvements in FID scores against real images, and demonstrate that NGP supports direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.

Item Tree Branch Level of Detail Models for Forest Navigation (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Zhang, Xiaopeng; Bao, Guanbo; Meng, Weiliang; Jaeger, Marc; Li, Hongjun; Deussen, Oliver; Chen, Baoquan; Chen, Min and Zhang, Hao (Richard)

We present a level of detail (LOD) method designed for tree branches. It can be combined with methods for processing tree foliage to facilitate navigation through large virtual forests. Starting from a skeletal representation of a tree, we fit polygon meshes of various densities to the skeleton, adjusting the mesh density according to the required visual fidelity. For distant models, these branch meshes are gradually replaced with semi-transparent lines until the tree recedes to a few lines. Construction of these complete LOD models is guided by error metrics to ensure smooth transitions between adjacent LOD models. We then present an instancing technique for discrete LOD branch models, consisting of polygon meshes plus semi-transparent lines. Line models with different transparencies are instanced on the GPU by merging multiple tree samples into a single model. Our technique reduces the number of GPU draw calls and increases rendering performance. Our experiments demonstrate that large-scale forest scenes can be rendered with excellent detail and shadows in real time.
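A distance-driven LOD switch of this kind can be summarized in a few lines. The sketch below is a schematic illustration with made-up thresholds, not the paper's error-metric-guided construction: nearby trees get dense branch meshes, mid-range trees decimated meshes, and distant trees collapse to a few semi-transparent lines.

```python
def select_branch_lod(distance, thresholds=(20.0, 60.0, 150.0)):
    """Return (representation, detail) for a tree at 'distance' meters.

    'thresholds' are illustrative tuning values, not from the paper.
    """
    near, mid, far = thresholds
    if distance < near:
        return "mesh", 1.0                    # full-density branch mesh
    if distance < mid:
        t = (distance - near) / (mid - near)
        return "mesh", 1.0 - 0.8 * t          # gradually decimated mesh
    if distance < far:
        t = (distance - mid) / (far - mid)
        return "lines", 1.0 - t               # semi-transparent lines, fading opacity
    return "lines", 0.05                      # a few faint lines

for d in (10, 40, 100, 300):
    print(d, select_branch_lod(d))
```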