Browsing by Author "Eisemann, Martin"
Item Game-based Transformations: A Playful Approach to Learning Transformations in Computer Graphics (The Eurographics Association, 2023)
Eisemann, Martin; Magana, Alejandra; Zara, Jiri
In this paper, we present a playful, game-based learning approach to teaching transformations in a second-year undergraduate computer graphics course. While the theoretical concepts are taught in class, the exercise consists of two web-based tools that help the students get a playful grasp on this complex topic, which is the foundation for many of the concepts typically taught later in computer graphics, such as the rendering pipeline, animation, camera motion, shadow mapping, and many more. The students' final projects and feedback indicate that the game-based introduction was well received.

Item Immersive Free-Viewpoint Panorama Rendering from Omnidirectional Stereo Video (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Mühlhausen, Moritz; Kappel, Moritz; Kassubeck, Marc; Wöhler, Leslie; Grogorick, Steve; Castillo, Susana; Eisemann, Martin; Magnor, Marcus; Hauser, Helwig and Alliez, Pierre
In this paper, we tackle the challenging problem of rendering real-world 360° panorama videos that support full 6 degrees-of-freedom (DoF) head motion from a prerecorded omnidirectional stereo (ODS) video. In contrast to recent approaches that create novel views for individual panorama frames, we introduce a video-specific, temporally consistent multi-sphere image (MSI) scene representation. Given a conventional ODS video, we first extract information by estimating framewise descriptive feature maps. Then, we optimize the global MSI model using theory from recent research on neural radiance fields. Instead of a continuous scene function, this MSI representation depicts colour and density information only for a discrete set of concentric spheres. To further improve the temporal consistency of our results, we apply an ancillary refinement step which optimizes the temporal coherency between successive video frames. Direct comparisons to recent baseline approaches show that our global MSI optimization yields superior performance in terms of visual quality. Our code and data will be made publicly available.

Item Next Event Estimation++: Visibility Mapping for Efficient Light Transport Simulation (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Guo, Jerry Jinfeng; Eisemann, Martin; Eisemann, Elmar; Eisemann, Elmar and Jacobson, Alec and Zhang, Fang-Lue
Monte-Carlo rendering requires determining the visibility between scene points as its most common and most compute-intensive operation for establishing paths between camera and light source. Unfortunately, many tests reveal occlusions, and the corresponding paths do not contribute to the final image. In this work, we present next event estimation++ (NEE++): a visibility mapping technique that performs visibility tests in a more informed way by caching voxel-to-voxel visibility probabilities. We show two scenarios: Russian-roulette-style rejection of visibility tests and direct importance sampling of the visibility. We show applications to next event estimation and light sampling in a uni-directional path tracer, and to light-subpath sampling in bidirectional path tracing. The technique is simple to implement, easy to add to existing rendering systems, and comes at almost no cost, as the required information can be directly extracted from the rendering process itself. It discards up to 80% of visibility tests on average, while reducing variance by ~20% compared to other state-of-the-art light sampling techniques with the same number of samples. It gracefully handles complex scenes with efficiency similar to Metropolis light transport techniques but with a more uniform convergence.
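A minimal sketch of the kind of voxel-to-voxel visibility caching and Russian-roulette rejection described in the NEE++ abstract above. The VoxelVisibilityCache class, the grid resolution, the running-average counts, and the survival-probability clamp are illustrative assumptions for this sketch, not the authors' implementation.

    # Hedged sketch: voxel-to-voxel visibility caching with Russian-roulette
    # rejection of shadow rays, loosely following the idea in the NEE++ abstract.
    # All names and parameters here are illustrative assumptions.
    import random

    class VoxelVisibilityCache:
        def __init__(self, scene_min, scene_max, resolution=32):
            self.res = resolution
            self.min = scene_min
            self.size = [max(mx - mn, 1e-9) for mn, mx in zip(scene_min, scene_max)]
            # visible / total counts per (voxel, voxel) pair, filled lazily
            self.visible = {}
            self.total = {}

        def _voxel(self, p):
            return tuple(
                min(self.res - 1, max(0, int((p[i] - self.min[i]) / self.size[i] * self.res)))
                for i in range(3)
            )

        def probability(self, a, b):
            """Estimated probability that a point in voxel(a) sees a point in voxel(b)."""
            key = (self._voxel(a), self._voxel(b))
            n = self.total.get(key, 0)
            if n == 0:
                return 1.0          # unknown pair: always test
            return self.visible.get(key, 0) / n

        def record(self, a, b, unoccluded):
            key = (self._voxel(a), self._voxel(b))
            self.total[key] = self.total.get(key, 0) + 1
            if unoccluded:
                self.visible[key] = self.visible.get(key, 0) + 1

    def next_event_contribution(shading_pt, light_pt, trace_shadow_ray, unshadowed_radiance, cache):
        """Russian-roulette-style rejection: skip shadow rays that are likely occluded,
        and reweight surviving ones to keep the estimator unbiased."""
        p_vis = cache.probability(shading_pt, light_pt)
        p_survive = max(0.05, p_vis)        # clamp so no path is rejected with certainty
        if random.random() > p_survive:
            return 0.0                      # rejected without tracing a shadow ray
        unoccluded = trace_shadow_ray(shading_pt, light_pt)
        cache.record(shading_pt, light_pt, unoccluded)   # cache is fed by the renderer itself
        if not unoccluded:
            return 0.0
        return unshadowed_radiance / p_survive           # compensate for the rejected tests

Dividing the surviving contributions by the survival probability keeps the estimator unbiased in expectation, which is the usual requirement for Russian-roulette-style rejection.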
Item Optimizing Temporal Stability in Underwater Video Tone Mapping (The Eurographics Association, 2023)
Franz, Matthias; Thang, B. Matthias; Sackhoff, Pascal; Scholz, Timon; Möller, Jannis; Grogorick, Steve; Eisemann, Martin; Guthe, Michael; Grosch, Thorsten
In this paper, we present an approach for the temporal stabilization of depth-based underwater image tone mapping methods for application to monocular RGB video. Typically, the goal is to improve the colors of focused objects while leaving more distant regions nearly unchanged, to preserve the underwater look-and-feel of the overall image. To do this, many methods rely on estimated depth to control the recolorization process, i.e., to enhance colors (reduce blue tint) only for objects close to the camera. However, while single-view depth estimation is usually consistent within a frame, it often suffers from inconsistencies across sequential frames, resulting in color fluctuations during tone mapping. We propose a simple yet effective inter-frame stabilization of the computed depth maps to achieve stable tone mapping results (a minimal stabilization sketch follows the last entry below). An evaluation on eight test sequences shows its effectiveness in a wide range of underwater scenarios.

Item PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis (The Eurographics Association, 2023)
Hahlbohm, Florian; Kappel, Moritz; Tauscher, Jan-Philipp; Eisemann, Martin; Magnor, Marcus; Guthe, Michael; Grosch, Thorsten
This paper presents a point-based neural rendering approach for complex real-world objects from a set of photographs. Our method is specifically geared towards representing fine detail and reflective surface characteristics at improved quality over current state-of-the-art methods. From the photographs, we create a 3D point model based on optimized neural feature points located on a regular grid. For rendering, we employ view-dependent spherical harmonics shading, differentiable rasterization, and a deep neural rendering network. By combining a point-based approach and novel regularizers, our method is able to accurately represent local detail such as fine geometry and high-frequency texture while at the same time convincingly interpolating unseen viewpoints during inference. Our method achieves about 7 frames per second at 800×800 pixel output resolution on commodity hardware, putting it within reach of real-time rendering applications.
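A minimal sketch of view-dependent spherical harmonics (SH) shading for a single point with learned per-point RGB coefficients, as named in the PlenopticPoints abstract above. The degree-2 real SH basis and the coefficient layout are assumptions for illustration; the paper's actual configuration may differ.

    # Hedged sketch: view-dependent colour from per-point SH coefficients.
    # Degree 2 and the (9, 3) coefficient layout are illustrative assumptions.
    import numpy as np

    def sh_basis_deg2(d):
        """Real SH basis functions up to degree 2, evaluated for unit direction d."""
        x, y, z = d
        return np.array([
            0.282095,                       # l=0
            0.488603 * y,                   # l=1, m=-1
            0.488603 * z,                   # l=1, m= 0
            0.488603 * x,                   # l=1, m=+1
            1.092548 * x * y,               # l=2, m=-2
            1.092548 * y * z,               # l=2, m=-1
            0.315392 * (3.0 * z * z - 1.0), # l=2, m= 0
            1.092548 * x * z,               # l=2, m=+1
            0.546274 * (x * x - y * y),     # l=2, m=+2
        ])

    def shade_point(sh_coeffs, view_dir):
        """sh_coeffs: (9, 3) per-point RGB SH coefficients (learned during optimization);
        view_dir: direction from the point towards the camera."""
        d = view_dir / np.linalg.norm(view_dir)
        basis = sh_basis_deg2(d)              # (9,)
        rgb = basis @ sh_coeffs               # (3,)
        return np.clip(rgb, 0.0, 1.0)

    # Usage: colour of one point seen from a given camera direction.
    coeffs = np.random.default_rng(0).normal(scale=0.1, size=(9, 3))
    print(shade_point(coeffs, np.array([0.2, 0.3, 1.0])))

Because the SH coefficients are per point, the same point can change colour with the viewing direction, which is what allows reflective surface characteristics to be represented.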
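Relating to the underwater tone-mapping entry above: a minimal sketch of inter-frame depth-map stabilization using a simple exponential moving average before the depth drives tone mapping. The blending weight, the outlier guard, and the function name are illustrative assumptions and not the method proposed in the paper.

    # Hedged sketch: temporal smoothing of per-frame depth estimates so that
    # depth-driven recolorization does not flicker between frames.
    # The exponential moving average and its parameters are illustrative only.
    import numpy as np

    def stabilize_depth_sequence(depth_frames, alpha=0.2, max_jump=0.5):
        """depth_frames: iterable of (H, W) float arrays from a single-view depth
        estimator. Returns a list of temporally smoothed depth maps.

        alpha:    blending weight of the new frame (lower = smoother).
        max_jump: relative change treated as genuine scene motion rather than
                  estimator noise; such pixels take the new value directly."""
        smoothed = None
        result = []
        for depth in depth_frames:
            depth = np.asarray(depth, dtype=np.float32)
            if smoothed is None:
                smoothed = depth.copy()
            else:
                rel_change = np.abs(depth - smoothed) / np.maximum(smoothed, 1e-6)
                keep_new = rel_change > max_jump          # likely real motion
                blended = (1.0 - alpha) * smoothed + alpha * depth
                smoothed = np.where(keep_new, depth, blended)
            result.append(smoothed.copy())
        return result

    # Usage with synthetic noisy depth maps standing in for per-frame estimates.
    rng = np.random.default_rng(1)
    frames = [np.full((4, 4), 2.0) + rng.normal(scale=0.05, size=(4, 4)) for _ in range(5)]
    stable = stabilize_depth_sequence(frames)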