Browsing by Author "Trapp, Matthias"
Item
Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow (The Eurographics Association, 2019)
Authors: Shekhar, Sumit; Semmo, Amir; Trapp, Matthias; Tursun, Okan; Pasewaldt, Sebastian; Myszkowski, Karol; Döllner, Jürgen
Editors: Schulz, Hans-Jörg; Teschner, Matthias; Wimmer, Michael
A convenient post-production video processing approach is to apply image filters on a per-frame basis. This allows image filters, originally designed for still images, to be extended to videos. However, per-image filtering may lead to temporal inconsistencies perceived as unpleasant flickering artifacts; the same holds for dense light-fields, where the inconsistencies are angular. In this work, we present a method for consistent filtering of videos and dense light-fields that addresses these problems. Our assumption is that inconsistencies due to per-image filtering appear as noise across the image sequence. We therefore denoise the filtered image sequence and combine the per-image filtered results with their denoised versions. To this end, we use saliency-based optimization weights to produce a consistent output while simultaneously preserving detail. To control the degree of consistency in the final output, we implemented our approach in an interactive real-time processing framework. Unlike state-of-the-art inconsistency-removal techniques, our approach does not rely on optic-flow to enforce coherence. Comparisons and a qualitative evaluation indicate that our method provides better results than state-of-the-art approaches for certain types of filters and applications.
(A simplified sketch of this blending scheme appears after the listing.)

Item
FERMIUM: A Framework for Real-time Procedural Point Cloud Animation and Morphing (The Eurographics Association, 2021)
Authors: Wegen, Ole; Böttger, Florence; Döllner, Jürgen; Trapp, Matthias
Editors: Andres, Bjoern; Campen, Marcel; Sedlmair, Michael
This paper presents a framework for generating real-time procedural animations and morphing of 3D point clouds. Point clouds, i.e., point-based geometry of varying density, can easily be acquired using LiDAR cameras or modern smartphones with LiDAR sensors. This raises the question of how such raw data can be used directly in the creative industry to create novel digital content using animations. For this purpose, we describe a framework that enables the implementation and combination of animation effects for point clouds. It takes advantage of graphics-hardware capabilities and enables the processing of complex datasets comprising millions of points. In addition, we compare and evaluate implementation variants for the subsequent morphing of multiple 3D point clouds.
(A simplified morphing sketch appears after the listing.)

Item
Interactive Control over Temporal Consistency while Stylizing Video Streams (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Shekhar, Sumit; Reimann, Max; Hilscher, Moritz; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias
Editors: Ritschel, Tobias; Weidlich, Andrea
Image stylization has seen significant advances and widespread interest over the years, leading to a multitude of techniques. Extending these techniques, such as Neural Style Transfer (NST), to videos is often achieved by applying them on a per-frame basis. However, per-frame stylization usually lacks temporal consistency, which manifests as undesirable flickering artifacts. Most existing approaches for enforcing temporal consistency suffer from one or more of the following drawbacks: they (1) are suitable for only a limited range of techniques, (2) do not support online processing because they require the complete video as input, (3) cannot provide consistency for the task of stylization, or (4) do not offer interactive consistency control. Domain-agnostic techniques for temporal consistency aim to eradicate flickering completely but typically disregard aesthetic aspects. For stylization tasks, however, consistency control is essential, as a certain amount of flickering adds to the artistic look and feel. Moreover, making this control interactive is paramount from a usability perspective. To meet these requirements, we propose an approach that stylizes video streams in real time at full-HD resolution while providing interactive consistency control. We develop a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy. Further, we employ an adaptive combination of local and global consistency features and enable interactive selection between them. Objective and subjective evaluations demonstrate that our method is superior to state-of-the-art video consistency approaches.
Project page: maxreimann.github.io/stream-consistency
(A simplified flow-warped blending sketch appears after the listing.)

Item
Teaching Data-driven Video Processing via Crowdsourced Data Collection (The Eurographics Association, 2021)
Authors: Reimann, Max; Wegen, Ole; Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias
Editors: Sousa Santos, Beatriz; Domik, Gitta
This paper presents the concept of, and our experience with, teaching an undergraduate course on data-driven image and video processing. When designing visual effects that make use of Machine Learning (ML) models for image-based analysis or processing, the availability of training data typically represents the key limitation for feasibility and effect quality. The goal of our course is to enable students to implement new kinds of visual effects by acquiring training datasets via crowdsourcing and using them to train ML models as part of a video processing pipeline. First, we propose a course structure and best practices for crowdsourced data acquisition. We then discuss the key insights gathered from an exceptional undergraduate seminar project that tackled the challenging domain of video annotation and learning. In particular, we focus on how to develop annotation tools and collect high-quality datasets with Amazon Mechanical Turk (MTurk) in a budget- and time-constrained classroom environment. We observe that implementing the full acquisition and learning pipeline is entirely feasible for a seminar project, imparts hands-on problem-solving skills, and promotes undergraduate research.
(A simplified MTurk HIT-creation sketch appears after the listing.)
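The sketches below are illustrative reconstructions, not the authors' code. First, a minimal NumPy sketch of the blending idea behind "Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow": per-frame filtered output is combined with a temporally denoised version under saliency-derived weights. The plain temporal mean standing in for the denoiser, and the direct use of a saliency map as the blend weight, are simplifying assumptions; the paper derives optimization-based weights.

```python
import numpy as np

def consistent_filter(filtered, saliency, window=5):
    """Blend per-frame filtered frames with a temporally denoised version.

    filtered: (T, H, W, C) float array of per-frame filtered results.
    saliency: (T, H, W) float array in [0, 1]; salient pixels keep
              more of the per-frame detail.
    """
    T = len(filtered)
    out = np.empty_like(filtered)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        # Temporal mean over a small window as a stand-in for the
        # dedicated image-sequence denoiser used in the paper.
        denoised = filtered[lo:hi].mean(axis=0)
        w = saliency[t][..., None]
        out[t] = w * filtered[t] + (1.0 - w) * denoised
    return out
```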
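For "FERMIUM", a minimal sketch of one morphing variant: nearest-neighbour correspondence followed by per-point linear interpolation. CPU-side NumPy stands in for the GPU evaluation the framework actually performs, and both function names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(src, dst):
    """Pair each source point with its nearest neighbour in the target
    cloud; a crude stand-in for a proper correspondence step."""
    _, idx = cKDTree(dst).query(src)
    return dst[idx]

def morph(src, dst_matched, t):
    """Per-point linear interpolation; on the GPU this would be a
    one-line vertex shader evaluated once per frame."""
    return (1.0 - t) * src + t * dst_matched

# Example: a frame 25% of the way from cloud `a` to cloud `b`:
# frame = morph(a, match_points(a, b), t=0.25)
```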
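For "Interactive Control over Temporal Consistency while Stylizing Video Streams", a sketch of flow-warped blending with a user-controlled consistency weight. OpenCV's Farneback flow stands in for the paper's lite optical-flow network, and a single global blend weight stands in for the adaptive combination of local and global consistency features.

```python
import cv2
import numpy as np

def stylize_stream(frames, stylize, consistency=0.5):
    """Per-frame stylization with flow-warped temporal blending.

    `consistency` in [0, 1] mimics the interactive control: 0 yields the
    raw per-frame result (maximum flicker), 1 leans fully on the warped
    previous output.
    """
    h, w = frames[0].shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    prev_gray, prev_out = None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        out = stylize(frame)
        if prev_out is not None:
            # Backward flow: where each current pixel was in the
            # previous frame, so the previous output can be warped in.
            flow = cv2.calcOpticalFlowFarneback(
                gray, prev_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            warped = cv2.remap(prev_out, grid_x + flow[..., 0],
                               grid_y + flow[..., 1], cv2.INTER_LINEAR)
            out = ((1.0 - consistency) * out.astype(np.float32)
                   + consistency * warped.astype(np.float32)).astype(out.dtype)
        prev_gray, prev_out = gray, out
        yield out
```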
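For "Teaching Data-driven Video Processing via Crowdsourced Data Collection", a sketch of the kind of HIT-publishing script a seminar team might write with boto3. The annotation-tool URL and all HIT parameters are placeholders; the sandbox endpoint lets students test without spending budget.

```python
import boto3

# MTurk requester sandbox: a free test environment.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds a self-hosted annotation tool in the HIT;
# the URL is a placeholder for the students' own tool.
question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/annotate?clip=001</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Annotate objects in a short video clip",
    Description="Draw bounding boxes around the requested objects.",
    Keywords="video, annotation, bounding box",
    Reward="0.15",                     # USD per assignment
    MaxAssignments=3,                  # redundant labels for quality control
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=900,
    Question=question.strip(),
)
print("HIT group:", hit["HIT"]["HITGroupId"])
```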