Browsing by Author "Guo, Baining"
Now showing 1 - 3 of 3
Item  Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence (© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Authors: Ye, Wenjie; Dong, Yue; Peers, Pieter; Guo, Baining
Editors: Benes, Bedrich; Hauser, Helwig
In this paper we present a novel method for recovering high‐resolution spatially‐varying isotropic surface reflectance of a planar exemplar from a flash‐lit close‐up video sequence captured with a regular hand‐held mobile phone. We do not require careful calibration of the camera and lighting parameters, but instead compute a per‐pixel flow map using a deep neural network to align the input video frames. For each video frame, we also extract the reflectance parameters, warp the neural reflectance features directly using the per‐pixel flow, and subsequently pool the warped features. Our method facilitates convenient hand‐held acquisition of spatially‐varying surface reflectance with commodity hardware by non‐expert users. Furthermore, our method enables aggregation of reflectance features from surface points visible in only a subset of the captured video frames, enabling the creation of high‐resolution reflectance maps that exceed the native camera resolution. We demonstrate and validate our method on a variety of synthetic and real‐world spatially‐varying materials.

Item  Learning and Exploring Motor Skills with Spacetime Bounds (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Ma, Li-Ke; Yang, Zeshi; Tong, Xin; Guo, Baining; Yin, KangKang
Editors: Mitra, Niloy; Viola, Ivan
Equipping characters with diverse motor skills is the current bottleneck of physics-based character animation. We propose a Deep Reinforcement Learning (DRL) framework that enables physics-based characters to learn and explore motor skills from reference motions. The key insight is to use loose space-time constraints, termed spacetime bounds, to limit the search space in an early-termination fashion. Because we rely on the reference only to specify loose spacetime bounds, our learning is more robust with respect to low-quality references. Moreover, spacetime bounds are hard constraints that improve learning of challenging motion segments, which can be ignored by imitation-only learning. We compare our method with state-of-the-art tracking-based DRL methods. We also show how to guide style exploration within the proposed framework.

Item  Towards Robust Direction Invariance in Character Animation (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ma, Li-Ke; Yang, Zeshi; Guo, Baining; Yin, KangKang
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
In character animation, direction invariance is a desirable property. That is, a pose facing north and the same pose facing south are considered the same; a character that can walk to the north is expected to be able to walk to the south in a similar style. To achieve such direction invariance, the current practice is to remove the facing direction's rotation around the vertical axis before further processing. Such a scheme, however, is not robust for rotational behaviors in the sagittal plane. In search of a smooth scheme to achieve direction invariance, we prove that in general a singularity-free scheme does not exist. We further connect the problem with the hairy ball theorem, which is better known to the graphics community.
Since a singularity-free scheme does not exist in general, we propose a remedy that uses a properly chosen motion direction to avoid singularities for the specific motions at hand. We perform comparative studies using two deep-learning-based methods: one builds kinematic motion representations, and the other learns physics-based controls. The results show that with our robust direction-invariant features, both methods achieve better results in terms of learning speed and/or final quality. We hope this paper not only boosts performance for character animation methods, but also helps related communities that are not yet fully aware of the direction invariance problem achieve more robust results.
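The Deep Reflectance Scanning entry above describes warping per-frame neural reflectance features into a common reference frame with a per-pixel flow and then pooling them across frames. The sketch below only illustrates that aggregation idea; the nearest-neighbour warping, the max pooling, and the array shapes are assumptions made for illustration, not the authors' implementation.

```python
# Rough sketch (illustrative only) of flow-aligned feature aggregation:
# per-frame feature maps are warped by a per-pixel flow and then pooled.
import numpy as np

def warp_nearest(features, flow):
    """Warp an (H, W, C) feature map by an (H, W, 2) flow into the reference frame."""
    h, w, _ = features.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return features[src_y, src_x]

def pool_frames(feature_maps, flows):
    """Max-pool flow-aligned features over all frames that observed each pixel."""
    warped = [warp_nearest(f, fl) for f, fl in zip(feature_maps, flows)]
    return np.max(np.stack(warped, axis=0), axis=0)
```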
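The Spacetime Bounds entry uses loose space-time constraints as hard, early-termination tests during reinforcement learning. A minimal sketch of that idea follows; the `env`, `policy`, per-frame `reference` poses, and the bound value are hypothetical stand-ins, not the paper's framework.

```python
# Minimal sketch: terminate a rollout early once the simulated character
# leaves a loose box around the reference pose at the current frame.
import numpy as np

def within_spacetime_bounds(sim_pose, ref_pose, bound):
    """Hard constraint: every pose coordinate stays within a loose box around the reference."""
    return np.all(np.abs(sim_pose - ref_pose) <= bound)

def rollout(env, policy, reference, bound=0.5, max_steps=600):
    """Collect one episode; stop as soon as the spacetime bound is violated."""
    state = env.reset()
    transitions = []
    for t in range(max_steps):
        action = policy(state)
        next_state, reward, done = env.step(action)
        transitions.append((state, action, reward, next_state))
        if not within_spacetime_bounds(env.character_pose(), reference[t], bound):
            break  # early termination prunes the search space
        if done:
            break
        state = next_state
    return transitions
```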
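The direction-invariance entry refers to the common pre-processing step of removing the facing direction's rotation about the vertical axis, and it proves that no scheme of this kind can be singularity free. The sketch below shows that heading-removal step for a y-up coordinate system and the singular configuration it cannot handle; the function names and the tolerance are illustrative assumptions, not the paper's code.

```python
# Sketch of the standard "remove heading" step and its singularity:
# when the forward vector is (nearly) vertical, the heading is undefined.
import numpy as np

def yaw_of(forward):
    """Heading angle of a 3D forward vector about the vertical (y-up) axis."""
    horiz = np.array([forward[0], 0.0, forward[2]])
    if np.linalg.norm(horiz) < 1e-6:
        # Singular case: facing straight up or down, heading is undefined.
        raise ValueError("heading undefined: forward vector is (nearly) vertical")
    return np.arctan2(horiz[0], horiz[2])

def remove_heading(points, forward):
    """Rotate (N, 3) pose points so the horizontal facing direction becomes +z."""
    yaw = yaw_of(forward)
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    return points @ rot_y.T
```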