Browsing by Author "Li, Chen"
Now showing 1 - 3 of 3
Item
A Differential Diffusion Theory for Participating Media
(The Eurographics Association and John Wiley & Sons Ltd., 2023)
Cen, Yunchi; Li, Chen; Li, Frederick W. B.; Yang, Bailin; Liang, Xiaohui; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
We present a novel approach to differentiable rendering for participating media, addressing the challenge of computing scene parameter derivatives. While existing methods focus on derivative computation within volumetric path tracing, they fail to significantly improve computational performance because multiply-scattered light is expensive to compute. To overcome this limitation, we propose a differential diffusion theory inspired by the classical diffusion equation. Our theory enables real-time computation of arbitrary derivatives, such as those with respect to optical absorption, scattering coefficients, and the anisotropy parameters of phase functions. By solving derivatives through the differential form of the diffusion equation, our approach achieves remarkable speed gains over Monte Carlo methods. This is the first differentiable rendering framework to compute scene parameter derivatives based on the diffusion approximation. Additionally, we derive the discrete form of the diffusion equation derivatives, facilitating efficient numerical solutions. Our experimental results on synthetic and realistic images demonstrate accurate and efficient estimation of arbitrary scene parameter derivatives. Our work represents a significant advancement in differentiable rendering for participating media, offering a practical and efficient way to compute derivatives while addressing the limitations of existing approaches. (A minimal numerical sketch of this idea follows the listing.)

Item
Rainbow: A Rendering-Aware Index for High-Quality Spatial Scatterplots with Result-Size Budgets
(The Eurographics Association, 2022)
Bai, Qiushi; Alsudais, Sadeem; Li, Chen; Zhao, Shuang; Bujack, Roxana; Tierny, Julien; Sadlo, Filip
We study the problem of computing a spatial scatterplot on a large dataset for arbitrary zooming/panning queries. We introduce a general framework called "Rainbow" that generates a high-quality scatterplot for a given result-size budget. Rainbow augments a spatial index with judiciously selected representative points offline. To answer a query, Rainbow traverses the index top-down and selects representative points of good quality until the result-size budget is reached. We experimentally demonstrate the effectiveness of Rainbow. (A sketch of such a budgeted traversal follows the listing.)

Item
A Rapid, End‐to‐end, Generative Model for Gaseous Phenomena from Limited Views
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Qiu, Sheng; Li, Chen; Wang, Changbo; Qin, Hong; Benes, Bedrich and Hauser, Helwig
Despite the rapid development and proliferation of computer graphics hardware for scene capture over the most recent decade, high-resolution 3D/4D acquisition of gaseous scenes (e.g., smoke) in real time remains technically challenging in graphics research. In this paper, we explore a hybrid approach that simultaneously takes advantage of both model-centric and data-driven methods. Specifically, we develop a novel conditional generative model that rapidly reconstructs the temporal density and velocity fields of gaseous phenomena from a sequence of two projection views. The data-driven component tightly couples the density update with the estimation of flow motion; as a result, we greatly improve reconstruction performance for smoke scenes.
First, we employ a conditional generative network to generate the initial density field from the input projection views and to estimate the flow motion between adjacent frames. Second, we utilize a differentiable advection layer and design a velocity estimation network with a long-term mechanism, which enables end-to-end training and more stable graphics effects. Third, we can re-simulate the input scene with flexible coupling effects based on the estimated velocity field, subject to artists' guidance or user interaction. Moreover, our generative model can accommodate a single projection view as input; in practice, additional projection views enable higher-fidelity reconstruction with more realistic and finer details. We have conducted extensive experiments to confirm the effectiveness, efficiency, and robustness of our new method compared with previous state-of-the-art techniques. (A sketch of a differentiable semi-Lagrangian advection step follows the listing.)
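To make the first item above more concrete: the differential diffusion idea rests on discretizing the diffusion approximation into a linear system A(theta) phi = b and differentiating that system, which yields dphi/dtheta = A^{-1}(db/dtheta - (dA/dtheta) phi) without any Monte Carlo sampling. The Python/NumPy fragment below is only a minimal 1D illustration under that reading; the discretization, the boundary handling, the finite-difference estimate of dA/dtheta, and every function name here are our own simplifications, not the paper's method.

    import numpy as np

    def assemble(sigma_a, sigma_s, g, Q, h):
        # Discrete 1D diffusion approximation: -d/dx(D dphi/dx) + sigma_a*phi = Q,
        # with D = 1/(3*(sigma_a + sigma_s*(1 - g))) and phi = 0 at both ends.
        n = sigma_a.size
        D = 1.0 / (3.0 * (sigma_a + sigma_s * (1.0 - g)))
        A = np.zeros((n, n))
        for i in range(n):
            Dl = D[i] if i == 0 else 0.5 * (D[i] + D[i - 1])
            Dr = D[i] if i == n - 1 else 0.5 * (D[i] + D[i + 1])
            A[i, i] = (Dl + Dr) / h**2 + sigma_a[i]
            if i > 0:
                A[i, i - 1] = -Dl / h**2
            if i < n - 1:
                A[i, i + 1] = -Dr / h**2
        return A, Q.copy()

    def solve_with_absorption_derivative(sigma_a, sigma_s, g, Q, h, k, eps=1e-4):
        # Differentiating A(theta) phi = b gives dphi = A^{-1}(db - dA phi).
        # Here b does not depend on sigma_a, and dA/dsigma_a[k] is estimated by
        # a finite difference purely to keep the sketch short.
        A, b = assemble(sigma_a, sigma_s, g, Q, h)
        phi = np.linalg.solve(A, b)
        perturbed = sigma_a.copy()
        perturbed[k] += eps
        A2, _ = assemble(perturbed, sigma_s, g, Q, h)
        dA = (A2 - A) / eps
        dphi = np.linalg.solve(A, -dA @ phi)
        return phi, dphi

    # Usage: homogeneous 64-cell slab, derivative w.r.t. absorption in cell 32.
    n = 64
    phi, dphi = solve_with_absorption_derivative(
        np.full(n, 0.5), np.full(n, 2.0), 0.3, np.ones(n), h=1.0 / n, k=32)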
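The second item (Rainbow) describes an index that stores judiciously chosen representative points at each level offline and answers a zoom/pan query by walking the index top-down until a result-size budget is met. The fragment below sketches only that budgeted top-down traversal over a quadtree-like structure; the offline selection of representatives, which is the paper's key contribution, is not modeled, and the Node layout and function names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # Quadtree-style node augmented offline with a few representative points.
        bounds: tuple                 # (xmin, ymin, xmax, ymax)
        representatives: list         # points pre-selected for this level
        children: list = field(default_factory=list)

    def intersects(b, q):
        # Axis-aligned rectangle overlap test.
        return not (b[2] < q[0] or q[2] < b[0] or b[3] < q[1] or q[3] < b[1])

    def query(root, viewport, budget):
        # Breadth-first, top-down traversal: coarse-level representatives are
        # emitted first, and the walk stops once the result-size budget is met.
        result, frontier = [], [root]
        while frontier and len(result) < budget:
            next_frontier = []
            for node in frontier:
                if not intersects(node.bounds, viewport):
                    continue
                for x, y in node.representatives:
                    if len(result) >= budget:
                        return result
                    if viewport[0] <= x <= viewport[2] and viewport[1] <= y <= viewport[3]:
                        result.append((x, y))
                next_frontier.extend(node.children)
            frontier = next_frontier
        return result

    # Usage: tiny two-level index, viewport over the lower-left region, budget 3.
    root = Node((0, 0, 100, 100), [(50, 50)], [
        Node((0, 0, 50, 50), [(10, 12), (40, 45)]),
        Node((50, 0, 100, 50), [(75, 20)]),
    ])
    print(query(root, viewport=(0, 0, 60, 60), budget=3))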
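The third item relies on a differentiable advection layer to couple the density update with velocity estimation during end-to-end training. A standard building block for such a layer is semi-Lagrangian advection written purely with array operations, so gradients can flow through it once the array library is an autodiff framework. The sketch below is that generic building block in NumPy, given only for illustration; it is not the paper's network or layer, and the grid layout and names are assumptions.

    import numpy as np

    def advect(density, velocity, dt):
        # Semi-Lagrangian advection: trace each cell centre backwards along the
        # velocity field and bilinearly sample the previous density there. Only
        # array operations are used, so the same code becomes differentiable when
        # NumPy is swapped for an autodiff framework.
        h, w = density.shape
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        x = np.clip(xs - dt * velocity[0], 0.0, w - 1.001)   # departure points
        y = np.clip(ys - dt * velocity[1], 0.0, h - 1.001)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * density[y0, x0] + fx * density[y0, x0 + 1]
        bot = (1 - fx) * density[y0 + 1, x0] + fx * density[y0 + 1, x0 + 1]
        return (1 - fy) * top + fy * bot

    # Usage: a square blob drifting to the right by 1.5 cells per step.
    den = np.zeros((64, 64))
    den[20:30, 20:30] = 1.0
    vel = np.stack([np.full((64, 64), 1.5), np.zeros((64, 64))])  # (vx, vy)
    den_next = advect(den, vel, dt=1.0)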