Browsing by Author "Kanai, Takashi"
Now showing 1 - 3 of 3
Item
Digital Terrain Model From UAV Photogrammetric Data (The Eurographics Association, 2020)
Morel, Jules; Bac, Alexandra; Kanai, Takashi
Editors: Biasotti, Silvia; Pintus, Ruggero; Berretti, Stefano
This paper presents a method designed to finely approximate ground surfaces from UAV photogrammetric point clouds by relying on statistical filters to separate vegetation from potential ground points, dividing the whole plot into sub-plots of similar complexity through an optimized tiling, and filling holes by blending multiple local approximations via the partition-of-unity principle. Experiments on very different terrain topologies show that our approach yields a significant improvement over the state-of-the-art method.
(An illustrative partition-of-unity blending sketch appears after this listing.)

Item
An Energy-Conserving Hair Shading Model Based on Neural Style Transfer (The Eurographics Association, 2020)
Qiao, Zhi; Kanai, Takashi
Editors: Lee, Sung-hee; Zollmann, Stefanie; Okabe, Makoto; Wuensche, Burkhard
We present a novel approach for shading photorealistic hair animation, an essential visual element for depicting the realistic hair of virtual characters. Our model shades high-quality hair quickly by extending conditional Generative Adversarial Networks. Furthermore, our method is much faster than previous, computationally expensive rendering algorithms and produces fewer artifacts than other neural image-translation methods. In this work, we provide a novel energy-conserving hair shading model that preserves most of the semi-transparent appearance of hair and accurately reproduces its interaction with the lights of the scene. Our method is easy to implement and is faster and more computationally efficient than previous algorithms.
(A generic energy-conservation sketch appears after this listing.)

Item
MultiResGNet: Approximating Nonlinear Deformation via Multi-Resolution Graphs (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Li, Tianxing; Shi, Rui; Kanai, Takashi
Editors: Mitra, Niloy; Viola, Ivan
This paper presents a graph-learning-based method with strong generalization ability for automatically generating nonlinear deformation for characters with an arbitrary number of vertices. Large-scale character datasets with a significant number of poses are normally required to train such automatic generalization tasks. Two key contributions enable us to address this challenge while keeping our network general enough to achieve realistic deformation approximation. First, after the automatic linear-based deformation step, we encode the roughly deformed meshes by constructing graphs, for which we propose a novel graph feature representation with three descriptors that represents meshes of arbitrary characters in varying poses. Second, we design a multi-resolution graph network (MultiResGNet) that takes the constructed graphs as input and outputs, end to end, the offset adjustment of each vertex. By processing multi-resolution graphs, general features can be better extracted, and network training no longer relies heavily on large amounts of training data. Experimental results show that the proposed method outperforms prior studies in deformation approximation for unseen characters and poses.
(A minimal multi-resolution graph-convolution sketch appears after this listing.)
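The first item blends multiple local ground approximations through the partition-of-unity principle. The paper's actual filtering, tiling, and local fitting steps are not reproduced here; the following minimal Python sketch only illustrates the blending idea, and the Wendland weight and the planar local patches are assumptions made for illustration.

```python
# Illustrative sketch only: partition-of-unity blending of local ground-height
# functions f_i with compactly supported weights w_i:
#   f(x) = sum_i w_i(x) f_i(x) / sum_i w_i(x)
# The local fits and the weight function are assumptions, not the paper's method.
import numpy as np

def wendland_weight(d, radius):
    """Compactly supported Wendland-style weight: positive inside `radius`, zero outside."""
    r = np.clip(d / radius, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def blend_partition_of_unity(query_xy, centers, local_fits, radius):
    """Blend local height functions into one ground surface at the query points."""
    query_xy = np.atleast_2d(query_xy)
    num = np.zeros(len(query_xy))
    den = np.zeros(len(query_xy))
    for c, f in zip(centers, local_fits):
        d = np.linalg.norm(query_xy - c, axis=1)
        w = wendland_weight(d, radius)
        num += w * f(query_xy)
        den += w
    # Points covered by no local patch get NaN instead of a meaningless value.
    return np.where(den > 0, num / np.maximum(den, 1e-12), np.nan)

# Toy usage: two overlapping planar "ground patches".
centers = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
local_fits = [lambda p: 0.1 * p[:, 0],           # gentle slope
              lambda p: 0.1 + 0.05 * p[:, 1]]    # slightly offset patch
print(blend_partition_of_unity([[0.5, 0.0]], centers, local_fits, radius=1.5))
```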
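The second item's abstract claims energy conservation but does not spell out how it is enforced. As a generic illustration of the constraint (not the paper's model), the sketch below rescales non-negative shading-lobe weights so their total never exceeds 1; the R/TT/TRT lobe names are borrowed from standard hair scattering models and are assumptions here.

```python
# Illustrative sketch only: one generic way to enforce energy conservation by
# capping the summed lobe weights at 1, so shaded hair never reflects or
# transmits more light than it receives. Not the paper's shading model.
import numpy as np

def energy_conserving_weights(raw_lobe_weights):
    """Rescale non-negative lobe weights so that their sum never exceeds 1."""
    w = np.clip(np.asarray(raw_lobe_weights, dtype=float), 0.0, None)
    total = w.sum()
    return w / total if total > 1.0 else w

# Example lobes: reflection (R), transmission (TT), secondary reflection (TRT).
print(energy_conserving_weights([0.7, 0.5, 0.3]))  # rescaled to sum to 1
print(energy_conserving_weights([0.4, 0.3, 0.2]))  # already conserving, unchanged
```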
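The third item processes graphs at multiple resolutions to predict per-vertex offsets. The sketch below is a tiny NumPy mock-up of that idea only: convolve on a fine vertex graph, pool to a coarse graph, convolve again, unpool, and map to 3D offsets. The layer sizes, the clustering-based pooling, and the random weights are assumptions; MultiResGNet's actual descriptors, architecture, and training are not reproduced.

```python
# Illustrative sketch only: a two-resolution graph-convolution pass in the spirit
# of multi-resolution graph processing. All concrete choices here are assumptions.
import numpy as np

def normalized_adjacency(A):
    """Row-normalize adjacency with self-loops: A_hat = D^-1 (A + I)."""
    A = A + np.eye(len(A))
    return A / A.sum(axis=1, keepdims=True)

def graph_conv(A_hat, H, W):
    """One graph-convolution layer: neighbor aggregation, linear map, ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)

# Fine graph: 6 mesh vertices with 3-dimensional input features (e.g. positions).
A_fine = np.array([[0,1,1,0,0,0],
                   [1,0,1,0,0,0],
                   [1,1,0,1,0,0],
                   [0,0,1,0,1,1],
                   [0,0,0,1,0,1],
                   [0,0,0,1,1,0]], dtype=float)
H_fine = rng.normal(size=(6, 3))

# Coarse graph: vertices pooled into two clusters {0,1,2} and {3,4,5}.
pool = np.array([[1,1,1,0,0,0],
                 [0,0,0,1,1,1]], dtype=float) / 3.0   # average-pooling matrix
A_coarse = np.array([[0, 1], [1, 0]], dtype=float)

# Multi-resolution pass: fine conv -> pool -> coarse conv -> unpool -> offsets.
W1, W2, W_out = rng.normal(size=(3, 8)), rng.normal(size=(8, 8)), rng.normal(size=(16, 3))
h1 = graph_conv(normalized_adjacency(A_fine), H_fine, W1)        # (6, 8) fine features
h2 = graph_conv(normalized_adjacency(A_coarse), pool @ h1, W2)   # (2, 8) coarse features
h_up = pool.T @ h2 * 3.0                                         # broadcast back to 6 vertices
offsets = np.concatenate([h1, h_up], axis=1) @ W_out             # (6, 3) per-vertex offsets
print(offsets.shape)
```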