Browsing by Author "Sen, Pradeep"
Now showing 1 - 5 of 5
Item Eurographics Symposium on Rendering 2019 - CGF38-4: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2019) Boubekeur, Tamy; Sen, Pradeep; Boubekeur, Tamy and Sen, Pradeep

Item Eurographics Symposium on Rendering 2019 – DL-only / Industry Track: Frontmatter (Eurographics Association, 2019) Boubekeur, Tamy; Sen, Pradeep; Boubekeur, Tamy and Sen, Pradeep

Item Fast and Robust Stochastic Structural Optimization (The Eurographics Association and John Wiley & Sons Ltd., 2020) Cui, Qiaodong; Langlois, Timothy; Sen, Pradeep; Kim, Theodore; Panozzo, Daniele and Assarsson, Ulf
Stochastic structural analysis can assess whether a fabricated object will break under real-world conditions. While this approach is powerful, it is also quite slow, which has previously limited its use to coarse resolutions (e.g., 26x34x28). We show that this approach can be made asymptotically faster, which in practice reduces computation time by two orders of magnitude and allows the use of previously infeasible resolutions. We achieve this by showing that the probability gradient can be computed in linear time instead of quadratic, and by using a robust new scheme that stabilizes the inertia gradients used by the optimization. Additionally, we propose a constrained restart method that deals with local minima, and a sheathing approach that further reduces the weight of the shape. Together, these components enable the discovery of previously inaccessible designs.

Item Offline Deep Importance Sampling for Monte Carlo Path Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2019) Bako, Steve; Meyer, Mark; DeRose, Tony; Sen, Pradeep; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Although modern path tracers are successfully being applied to many rendering applications, there is considerable interest in pushing them towards ever-decreasing sampling rates.
As the sampling rate is substantially reduced, however, even Monte Carlo (MC) denoisers, which have been very successful at removing large amounts of noise, typically do not produce acceptable final results. As an orthogonal approach, we believe that good importance sampling of paths is critical for producing better-converged, path-traced images at low sample counts that can then, for example, be more effectively denoised. However, most recent importance-sampling techniques for guiding path tracing (an area known as "path guiding") involve expensive online (per-scene) training and offer benefits only at high sample counts. In this paper, we propose an offline, scene-independent deep-learning approach that can importance sample first-bounce light paths for general scenes without the need for costly online training, and can start guiding path sampling with as little as 1 sample per pixel. Instead of learning to "overfit" to the sampling distribution of a specific scene like most previous work, our data-driven approach is trained a priori on a set of training scenes to use a local neighborhood of samples with additional feature information to reconstruct the full incident radiance at a point in the scene, which enables first-bounce importance sampling for new test scenes. Our solution is easy to integrate into existing rendering pipelines without the need for retraining, as we demonstrate by incorporating it into both the Blender/Cycles and Mitsuba path tracers.
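The variance argument behind importance sampling can be illustrated in a few lines. The following is a minimal, hypothetical NumPy sketch (not the paper's learned sampler): drawing samples from a pdf proportional to the integrand yields a lower-variance Monte Carlo estimator than uniform sampling at the same sample count.

```python
import numpy as np

# Toy 1-D problem: estimate the integral of f(x) = x^2 on [0, 1]
# (true value 1/3) with n samples, comparing uniform sampling to
# importance sampling with a pdf proportional to the integrand.
rng = np.random.default_rng(0)
f = lambda x: x ** 2
n = 10_000

# Uniform sampling: p(x) = 1, so each per-sample estimate is f(x).
u = rng.uniform(0.0, 1.0, n)
est_uniform = f(u)

# Importance sampling with p(x) = 3x^2 (proportional to f); sampling
# by inverse CDF gives X = U^(1/3). Each per-sample estimate
# f(X)/p(X) is then constant, so the variance collapses.
xi = rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)
est_is = f(xi) / (3.0 * xi ** 2)

print(est_uniform.mean(), est_uniform.std())  # ~1/3, noticeable spread
print(est_is.mean(), est_is.std())            # ~1/3, near-zero spread
```

In path tracing the integrand (incident radiance) is unknown, which is exactly why the abstract's approach learns to reconstruct it before sampling from it.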
Finally, we show how our offline deep importance sampler (ODIS) increases convergence at low sample counts and improves the results of an off-the-shelf denoiser relative to other state-of-the-art sampling techniques.

Item A Phase-Based Approach for Animating Images Using Video Examples (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Prashnani, Ekta; Noorkami, Maneli; Vaquero, Daniel; Sen, Pradeep; Chen, Min and Zhang, Hao (Richard)
We present a novel approach for animating static images that contain objects that move in a subtle, stochastic fashion (e.g. rippling water, swaying trees, or flickering candles). To do this, our algorithm leverages example videos of similar objects, supplied by the user. Unlike previous approaches, which estimate motion fields in the example video to transfer motion into the image, a process that is brittle and produces artefacts, we propose an Eulerian approach that uses the phase information from the sample video to animate the static image. As is well known, phase variations in a signal relate naturally to the displacement of the signal via the Fourier Shift Theorem. To enable local and spatially varying motion analysis, we analyse phase changes in a complex steerable pyramid of the example video. These phase changes are then transferred to the corresponding spatial sub-bands of the input image to animate it. We demonstrate that this simple, phase-based approach for transferring small motion is more effective at animating still images than methods which rely on optical flow.
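The Fourier Shift Theorem that the phase-based abstract invokes can be demonstrated directly. The sketch below is generic NumPy (not the paper's complex-steerable-pyramid pipeline) and uses a 1-D signal for clarity: adding a linear phase ramp in the frequency domain translates the signal in the spatial domain.

```python
import numpy as np

# Build a Gaussian bump centered at sample 100.
N = 256
x = np.arange(N)
signal = np.exp(-0.5 * ((x - 100) / 5.0) ** 2)

# Fourier Shift Theorem: f(x - d) has spectrum F(k) * exp(-2*pi*i*k*d).
# Applying that phase ramp and inverting the FFT shifts the bump by d.
d = 20
k = np.fft.fftfreq(N)  # frequencies in cycles per sample
shifted = np.fft.ifft(np.fft.fft(signal) * np.exp(-2j * np.pi * k * d)).real

# The bump now peaks at sample 120 instead of 100.
print(int(np.argmax(signal)), int(np.argmax(shifted)))
```

For an integer shift this reproduces a circular roll of the signal exactly (up to floating-point error); the paper's method exploits the same phase-displacement relationship locally, per sub-band, rather than globally as here.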