Browsing by Author "Sun, Zhengxing"
Now showing 1 - 3 of 3
Item: DFR: Differentiable Function Rendering for Learning 3D Generation from Images (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Wu, Yunjie; Sun, Zhengxing
Editors: Jacobson, Alec; Huang, Qixing

Learning-based 3D generation is a popular research field in computer graphics. Recently, some works have adopted implicit functions defined by neural networks to represent 3D objects and have become the current state of the art. However, training such a network requires precise ground-truth 3D data and heavy pre-processing, which is often impractical. To tackle this problem, we propose DFR, a differentiable process for rendering the implicit-function representation of a 3D object into a 2D image. Briefly, our method simulates the physical imaging process by casting multiple rays through the image plane into the function space, aggregating the information along each ray, and performing differentiable shading according to each ray's state. We also propose strategies to optimize the rendering pipeline, making it efficient in both time and memory so that it can support network training. With DFR, we can perform many 3D modeling tasks with only 2D supervision. We conduct experiments on several applications; both quantitative and qualitative evaluations demonstrate the effectiveness of our method.

Item: Progressive 3D Scene Understanding with Stacked Neural Networks (The Eurographics Association, 2018)
Authors: Song, Youcheng; Sun, Zhengxing
Editors: Fu, Hongbo; Ghosh, Abhijeet; Kopf, Johannes

3D scene understanding is difficult due to the natural hierarchical structures and complicated contextual relationships in 3D scenes. In this paper, a progressive 3D scene understanding method is proposed. The scene understanding task is decomposed into several different but related tasks, and semantic objects are progressively separated from coarse to fine. This is achieved by stacking multiple segmentation networks: each network segments the 3D scene at a coarser level and passes its result as context to the next network for a finer-grained segmentation. For network training, we build a connection graph (vertices represent objects; edge weights represent the contact area between objects) and compute a maximum spanning tree to generate coarse-to-fine labels. We then train the stacked network with hierarchical supervision based on the generated coarse-to-fine labels. Finally, using the trained model, we not only obtain better segmentation accuracy at the finest level than directly using a single segmentation network, but also obtain a hierarchical understanding of the 3D scene as a bonus.
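To make the ray-based rendering idea in the DFR abstract above concrete, here is a minimal PyTorch sketch, not the authors' implementation: the occupancy network implicit_fn, the near/far bounds, and the max-based aggregation along each ray are all illustrative assumptions.

```python
import torch

def render_silhouette(implicit_fn, rays_o, rays_d, n_samples=64, near=0.5, far=2.5):
    """Differentiable silhouette rendering of an implicit occupancy function.

    implicit_fn: maps (N, 3) points to (N,) occupancy logits (hypothetical network).
    rays_o, rays_d: (R, 3) ray origins and unit directions through the image plane.
    """
    # Sample depths uniformly along each ray between assumed near/far bounds.
    t = torch.linspace(near, far, n_samples, device=rays_o.device)          # (S,)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]        # (R, S, 3)
    occ = torch.sigmoid(implicit_fn(pts.reshape(-1, 3))).reshape(-1, n_samples)
    # Aggregate along each ray: a pixel is "on" if any sample is occupied.
    # max is sub-differentiable, so gradients flow back to the network.
    return occ.max(dim=1).values                                            # (R,)
```

Because every step is differentiable, a loss on the rendered 2D image can back-propagate to the implicit function's weights, which is what enables training from 2D supervision alone.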
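The coarse-to-fine label generation in the Progressive 3D Scene Understanding entry above rests on a maximum spanning tree over a contact graph. A minimal sketch of that step using networkx follows; the object names and contact areas are hypothetical.

```python
import networkx as nx

# Hypothetical contact areas between objects in a segmented scene;
# in the paper, edge weights are the contact areas between objects.
contact_area = {
    ("floor", "table"): 1.2,
    ("table", "lamp"): 0.3,
    ("floor", "chair"): 0.9,
    ("chair", "cushion"): 0.2,
}

# Build the connection graph: vertices are objects, weighted edges are contacts.
G = nx.Graph()
for (a, b), w in contact_area.items():
    G.add_edge(a, b, weight=w)

# The maximum spanning tree keeps only the strongest contacts.
mst = nx.maximum_spanning_tree(G, weight="weight")
print(sorted(mst.edges(data="weight"), key=lambda e: -e[2]))
```

Cutting the tree's weakest edges first separates loosely attached objects early and tightly attached ones late, which yields a coarse-to-fine grouping usable as hierarchical supervision.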
Item: Resolution-switchable 3D Semantic Scene Completion (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Luo, Shoutong; Sun, Zhengxing; Sun, Yunhan; Wang, Yi
Editors: Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne

Semantic scene completion (SSC) aims to recover the complete geometric structure, as well as the semantic segmentation results, from partial observations. Previous works could only perform this task at a fixed resolution. To handle this problem, we propose a new method that can generate results at different resolutions without redesigning or retraining. The basic idea is to decouple the direct connection between resolution and network structure. To achieve this, we convert the feature volume generated by an SSC encoder into a resolution-adaptive feature and decode this feature via points. We also design a resolution-adapted point sampling strategy for testing and a category-based point sampling strategy for training to further address this problem. The encoder of our method can be replaced by existing SSC encoders. We achieve better results at other resolutions while maintaining the same accuracy as at the original resolution. Code and data are available at https://github.com/lstcutong/ReS-SSC.
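The resolution switch in this last entry comes from decoding a fixed-size feature volume at arbitrary point coordinates. Below is a minimal PyTorch sketch of that decoupling, assuming a trilinear grid_sample lookup followed by a per-point MLP; it is not the authors' exact decoder.

```python
import torch
import torch.nn.functional as F

def decode_at_resolution(feat_vol, mlp, out_res):
    """Decode a fixed-size feature volume at an arbitrary output resolution.

    feat_vol: (1, C, D, H, W) feature volume from an SSC encoder.
    mlp:      maps (N, C) point features to (N, num_classes) logits (assumed head).
    out_res:  desired output grid resolution, e.g. 32 or 64.
    """
    # Normalized coordinates of the target grid's voxel centers in [-1, 1].
    lin = torch.linspace(-1, 1, out_res)
    z, y, x = torch.meshgrid(lin, lin, lin, indexing="ij")
    grid = torch.stack([x, y, z], dim=-1).view(1, out_res, out_res, out_res, 3)
    # Trilinear interpolation decouples feature resolution from output resolution.
    feats = F.grid_sample(feat_vol, grid, align_corners=True)     # (1, C, R, R, R)
    feats = feats.flatten(2).transpose(1, 2).squeeze(0)           # (R^3, C)
    return mlp(feats).view(out_res, out_res, out_res, -1)         # per-voxel logits
```

Since the query grid is built at call time, the same encoder and decoder weights serve any output resolution, which is the decoupling the abstract describes.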