2014

Temporal Coherency in Video Tone Mapping

Boitard, Ronan

Computational Shape Understanding for 3D Reconstruction and Modeling

Ceylan, Duygu

Visual Exploration of Cardiovascular Hemodynamics

Gasteiger, Rocco

Ray tracing of dynamic scenes

Günther, Johannes

Data-driven methods for interactive visual content creation and manipulation

Jain, Arjun

Precise Depth Image Based Real-Time 3D Difference Detection

Kahn, Svenja

High-Quality Real-Time Global Illumination in Augmented Reality

Kan, Peter

Concepts and Algorithms for the Deformation, Analysis, and Compression of Digital Shapes

von Tycowicz, Christoph

Understanding the Structure of 3D Shapes: PolyCubes and Curve-Skeletons

Livesu, Marco

Kaleidoscopic imaging

Reshetouski, Ilya

Inverse rendering for scene reconstruction in general environments

Wu, Chenglei

Adaptive Semantics Visualization

Nazemi, Kawa

Quad Layouts – Generation and Optimization of Conforming Quadrilateral Surface Partitions

Campen, Marcel

Edit Propagation using Geometric Analogies

Guerrero, Paul

Strategies for efficient parallel visualization

Frey, Steffen

Interactions with Gigantic Point Clouds

Scheiblauer, Claus

Simulation, Animation and Rendering of Crowds in Real-Time

Beacco, Alejandro

Application and validation of capacitive proximity sensing systems in smart environments

Braun, Andreas

Constrained Camera Motion Estimation and 3D Reconstruction

Kurz, Christian

Measurement-Based Model Estimation for Deformable Objects

Miguel, Eder

Image Space Adaptive Rendering

Rousselle, Fabrice

Multi-resolution shape analysis based on discrete Morse decompositions

Iuricich, Federico


Recent Submissions

Now showing 1 - 22 of 22
  • Item
    Temporal Coherency in Video Tone Mapping
    (Boitard, 2014-10-16) Boitard, Ronan
    One of the main goals of digital imagery is to improve the capture and the reproduction of real or synthetic scenes on display devices with restricted capabilities. Standard imagery techniques are limited with respect to the dynamic range that they can capture and reproduce. High Dynamic Range (HDR) imagery aims at overcoming these limitations by capturing, representing and displaying the physical value of light measured in a scene. However, current commercial displays will not vanish instantly; hence, backward compatibility between HDR content and those displays is required. This compatibility is ensured through an operation called tone mapping that retargets the dynamic range of HDR content to the restricted dynamic range of a display device. Although many tone mapping operators exist, they focus mostly on still images. The challenges of tone mapping HDR videos are more complex than those of still images since the temporal dimension is added. In this work, the focus was on the preservation of temporal coherency when performing video tone mapping. Two main research avenues are investigated: the subjective quality of tone mapped video content and its compression efficiency. Indeed, tone mapping each frame of a video sequence independently leads to temporal artifacts. Those artifacts impair the visual quality of the tone mapped video sequence and need to be reduced. Through experiments with HDR videos and Tone Mapping Operators (TMOs), we categorized temporal artifacts into six categories. We tested video tone mapping operators (techniques that take into account more than a single frame) for the different types of temporal artifact and observed that they could handle only three out of the six types. Consequently, we designed a post-processing technique that adapts to any tone mapping operator and reduces the three remaining types of artifact. A subjective evaluation reported that our technique always preserves or increases the subjective quality of tone mapped content for the sequences and TMOs tested. The second topic investigated was the compression of tone mapped video content. So far, work on tone mapping and video compression has focused on optimizing a tone map curve to achieve high compression ratios. These techniques change the rendering of the video to reduce its entropy, thereby removing any artistic intent or constraint on the final result. That is why we proposed a technique that reduces the entropy of a tone mapped video without altering its rendering. Our method adapts the quantization to increase the correlation between successive frames. Results showed an average bit-rate reduction at the same PSNR ranging from 5.4% to 12.8%.
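    To make the flicker problem concrete, the sketch below shows a toy global tone mapping operator applied frame by frame; it illustrates why naive per-frame tone mapping causes temporal incoherency and is not the post-processing technique proposed in the thesis (all names are illustrative).

```python
import numpy as np

def tone_map_frame(hdr_luminance, eps=1e-6):
    """Toy global tone mapping operator: log-compress an HDR luminance frame
    and normalize it by the frame's own minimum/maximum (illustration only)."""
    log_lum = np.log(hdr_luminance + eps)
    lo, hi = log_lum.min(), log_lum.max()
    return (log_lum - lo) / max(hi - lo, eps)      # LDR values in [0, 1]

def tone_map_video_naively(hdr_frames):
    """Tone mapping each frame independently: the per-frame statistics (lo, hi)
    change over time, so the same scene luminance can map to different display
    values in consecutive frames -- one source of the temporal artifacts
    (e.g. global flickering) that video tone mapping has to avoid."""
    return [tone_map_frame(frame) for frame in hdr_frames]
```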
  • Item
    Computational Shape Understanding for 3D Reconstruction and Modeling
    (Ceylan, 2014-06-10) Ceylan, Duygu
    The physical and the digital world are becoming tightly connected as we see an increase in the variety of 2D and 3D acquisition devices, e.g., smartphones, digital cameras, scanners, and commercial depth sensors. The recent advances in acquisition technologies facilitate the data capture process and make it accessible for casual users. This tremendous increase in digital content comes with many application opportunities including medical applications, industrial simulations, documentation of cultural artifacts, visual effects, etc. The success of these digital applications depends on two fundamental tasks. On the one hand, our goal is to obtain an accurate and high-quality digital representation of the physical world. On the other hand, performing high-level shape analysis, e.g., structure discovery in the underlying content, is crucial. Both of these tasks are extremely challenging due to the large amount of available digital content and the varying data quality of this content, including noisy and partial data measurements. Nonetheless, there exists a tight coupling between these two tasks: accurate low-level data measurement makes it easier to perform shape analysis, whereas the use of suitable semantic priors provides opportunities to increase the accuracy of the digital data. In this dissertation, we investigate the benefits of tackling the low-level data measurement and high-level shape analysis tasks in a coupled manner for 3D reconstruction and modeling purposes. We specifically focus on image-based reconstruction of urban areas, where we exploit the abundance of symmetry as the principal shape analysis tool. The use of symmetry and repetitions is reinforced in architecture due to economic, functional, and aesthetic considerations. We utilize these priors to simultaneously provide non-local coupling between geometric computations and extract semantic information in urban data sets. Concurrent to the advances in 3D geometry acquisition and analysis, we are experiencing a revolution in digital manufacturing. With the advent of accessible 3D fabrication methods such as 3D printing and laser cutting, we see a cyclic pipeline linking the physical and the digital worlds. While we strive to create accurate digital replicas of real-world objects on one hand, there is a growing user base in demand of manufacturing the existing content on the other hand. Thus, in the last part of this dissertation, we extend our shape understanding tools to the problem of designing and fabricating functional models. Each manufacturing device comes with technology-specific limitations and thus imposes various constraints on the digital models that can be fabricated. We demonstrate that a good level of shape understanding is necessary to optimize the digital content for fabrication.
  • Item
    Visual Exploration of Cardiovascular Hemodynamics
    (Gasteiger, 2014-02-07) Gasteiger, Rocco
    Cardiovascular diseases (CVD) are the most common cause of death worldwide and can lead to fatal consequences for the patient. Relevant examples of CVDs are acquired or congenital heart failures, stenoses and aneurysms. Among the various causes of such diseases, hemodynamic information plays an important role and is in the focus of current clinical and biomedical research. Here, the term hemodynamics comprises quantitative and qualitative blood flow information in the heart, the vessels or corresponding vessel pathologies. This includes, for example, blood flow velocity, inflow behavior, wall shear stress and vortex structures. Investigations have shown that hemodynamic information may provide hints about the initiation, existence, progression and severity of a particular CVD. An important part of these investigations is a visual exploration and qualitative analysis, respectively, of the complex morphological and hemodynamic datasets, for which the thesis at hand achieves new contributions. The data acquisition of the hemodynamic information relies primarily on MRI imaging and simulation, whereby the thesis describes essential data processing steps for both modalities. Existing visual exploration approaches and relevant application areas from the clinical and biomedical research domain are discussed, which are used to derive three research goals of the thesis. These goals consist of the development of a new visualization method to expressively depict vessel morphology with embedded flow information, an automatic extraction approach for qualitative hemodynamic parameters, as well as a flexible focus-and-context approach to investigate multiple types of hemodynamic information. Although the proposed methods focus on simulated hemodynamics in cerebral aneurysms, this thesis also demonstrates their application to other vessel domains and measured flow data. The achieved results are evaluated and discussed with clinicians as well as biomedical and simulation experts who are involved in the data analysis of hemodynamic information. The obtained insights are incorporated into recommendations and challenges for future work in this field.
  • Item
    Ray tracing of dynamic scenes
    (Günther, Johannes, 2014-10-24) Günther, Johannes
    In the last decade ray tracing performance reached interactive frame rates for nontrivial scenes, which roused the desire to also ray trace dynamic scenes. Changing the geometry of a scene, however, invalidates the precomputed auxiliary data structures needed to accelerate ray tracing. In this thesis we review and discuss several approaches to deal with the challenge of ray tracing dynamic scenes. In particular we present the motion decomposition approach that avoids the invalidation of acceleration structures due to changing geometry. To this end, the animated scene is analyzed in a preprocessing step to split it into coherently moving parts. Because the relative movement of the primitives within each part is small, it can be handled by special, pre-built kd-trees. Motion decomposition enables ray tracing of predefined animations and skinned meshes at interactive frame rates. Our second main contribution is the streamed binning approach. It approximates the evaluation of the cost function that governs the construction of optimized kd-trees and BVHs. As a result, construction speed, especially for BVHs, can be increased by one order of magnitude while still maintaining their high quality for ray tracing.
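    As background for the streamed binning contribution, the sketch below evaluates the standard surface area heuristic (SAH) over a fixed number of bins along one axis. It is a deliberately simplified, non-streaming version (the builder described in the thesis streams primitives through the bins); all function and parameter names are illustrative.

```python
import numpy as np

def surface_area(lo, hi):
    """Surface area of an axis-aligned box given its min/max corners."""
    d = np.maximum(hi - lo, 0.0)
    return 2.0 * (d[0] * d[1] + d[1] * d[2] + d[0] * d[2])

def binned_sah_split(prim_lo, prim_hi, axis, n_bins=16, c_trav=1.0, c_isect=1.0):
    """Pick a BVH split plane along `axis` by evaluating the SAH cost at a
    fixed number of binned candidate positions. Sketch only: primitives are
    binned by centroid, and child bounds are recomputed per candidate instead
    of being accumulated in a single streaming pass."""
    centroids = 0.5 * (prim_lo + prim_hi)
    lo, hi = centroids[:, axis].min(), centroids[:, axis].max()
    if hi <= lo:
        return None
    bin_id = np.minimum(((centroids[:, axis] - lo) / (hi - lo) * n_bins).astype(int),
                        n_bins - 1)
    parent_sa = surface_area(prim_lo.min(axis=0), prim_hi.max(axis=0))

    best_cost, best_split = np.inf, None
    for split in range(1, n_bins):
        left, right = bin_id < split, bin_id >= split
        n_l, n_r = int(left.sum()), int(right.sum())
        if n_l == 0 or n_r == 0:
            continue
        sa_l = surface_area(prim_lo[left].min(axis=0), prim_hi[left].max(axis=0))
        sa_r = surface_area(prim_lo[right].min(axis=0), prim_hi[right].max(axis=0))
        cost = c_trav + c_isect * (sa_l / parent_sa * n_l + sa_r / parent_sa * n_r)
        if cost < best_cost:
            best_cost, best_split = cost, split
    return best_split, best_cost
```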
  • Item
    Data-driven methods for interactive visual content creation and manipulation
    (Jain, Arjun, 2014-03-19) Jain, Arjun
    Software tools for creating and manipulating visual content --- be they for images, video or 3D models --- are often difficult to use and involve a lot of manual interaction at several stages of the process. Coupled with long processing and acquisition times, content production is rather costly and poses a potential barrier to many applications. Although cameras now allow anyone to easily capture photos and video, tools for manipulating such media demand both artistic talent and technical expertise. However, at the same time, vast corpuses with existing visual content such as Flickr, YouTube or Google 3D Warehouse are now available and easily accessible. This thesis proposes a data-driven approach to tackle the above mentioned problems encountered in content generation. To this end, statistical models trained on semantic knowledge harvested from existing visual content corpuses are created. Using these models, we then develop tools which are easy to learn and use, even by novice users, but still produce high-quality content. These tools have intuitive interfaces, and enable the user to have precise and flexible control. Specifically, we apply our models to create tools to simplify the tasks of video manipulation, 3D modeling and material assignment to 3D objects.
  • Item
    Precise Depth Image Based Real-Time 3D Difference Detection
    (Kahn, 2014-03-25) Kahn, Svenja
    Then, this thesis answers Q2 by providing solutions for enhancing the 3D difference detection accuracy, both by precise pose estimation and by reducing depth measurement noise. A precise variant of the 3D difference detection concept is proposed, which combines two main aspects. First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a very precise coordinate measuring machine. Second, measurement noise in the captured depth images is reduced and missing depth information is filled in by extending the 3D difference detection with 3D reconstruction. The accuracy of the proposed 3D difference detection is quantified by a quantitative evaluation. This provides an answer to Q3. The accuracy is evaluated both for the basic setup and for the variants that focus on a high precision. The quantitative evaluation using real-world data covers both the accuracy which can be achieved with a time-of-flight camera (SwissRanger 4000) and with a structured light depth camera (Kinect). With the basic setup and the structured light depth camera, differences of 8 to 24 millimeters can be detected from one meter measurement distance. With the enhancements proposed for precise 3D difference detection, differences of 4 to 12 millimeters can be detected from one meter measurement distance using the same depth camera. By solving the challenges described by the three research questions, this thesis provides a solution for precise real-time 3D difference detection based on depth images. With the approach proposed in this thesis, dense 3D differences can be detected in real time and from arbitrary viewpoints using a single depth camera. Furthermore, by coupling the depth camera with a coordinate measuring machine and by integrating 3D reconstruction into the 3D difference detection, 3D differences can be detected in real time and with a high precision.
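    For intuition, the core comparison behind depth-image-based difference detection can be written as a per-pixel test between the measured depth and the depth expected from the reference 3D model at the estimated camera pose. The sketch below is a generic simplification with hypothetical names, not the evaluated pipeline; pose refinement and noise reduction are omitted.

```python
import numpy as np

def detect_depth_differences(measured_depth, expected_depth,
                             threshold_m=0.02, invalid=0.0):
    """Per-pixel 3D difference detection sketch: `measured_depth` is the
    captured depth image, `expected_depth` the depth of the reference 3D
    model rendered at the (estimated) camera pose, both in metres. Pixels
    whose depth deviates by more than `threshold_m` are flagged; pixels with
    the sentinel value `invalid` in either image are ignored."""
    valid = (measured_depth != invalid) & (expected_depth != invalid)
    difference = np.abs(measured_depth - expected_depth)
    return valid & (difference > threshold_m), difference
```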
  • Item
    High-Quality Real-Time Global Illumination in Augmented Reality
    (Kan, 2014-08-22) Kan, Peter
    High-quality image synthesis, indistinguishable from reality, has been one of the most important problems in computer graphics from its beginning. Image synthesis in augmented reality (AR) poses an even more challenging problem, because coherence of virtual and real objects is required. In particular, visual coherence plays an important role in AR. Visual coherence can be achieved by calculating global illumination, which models the light interaction between virtual and real objects. Correct light interaction provides precise information about spatial location, radiometric properties, and geometric details of inserted virtual objects. In order to calculate light interaction accurately, high-quality global illumination is required. However, high-quality global illumination algorithms have not been suitable for real-time AR due to their high computational cost. Global illumination in AR can be beneficial in many areas including automotive or architectural design, medical therapy, rehabilitation, surgery, education, movie production, and others. This thesis approaches the problem of visual coherence in augmented reality by adopting physically based rendering algorithms and presenting a novel GPU implementation of these algorithms. The developed rendering algorithms calculate the two solutions of global illumination required for rendering in AR in one pass, using a novel one-pass differential rendering algorithm. The rendering algorithms presented in this thesis are based on GPU ray tracing, which provides high-quality results. The developed rendering system computes various visual features in high quality. These features include depth of field, shadows, specular and diffuse global illumination, reflections, and refractions. Moreover, numerous improvements of the physically based rendering algorithms are presented which allow fast and accurate light transport calculation in AR. Additionally, this thesis presents the differential progressive path tracing algorithm, which can calculate the unbiased AR solution in a progressive fashion. Finally, the presented methods are compared to the state of the art in real-time global illumination for AR. The results show that our high-quality global illumination outperforms other methods in terms of accuracy of the rendered images. Additionally, the human perception of the developed global illumination methods for AR is evaluated. The impact of the presented rendering algorithms on visual realism and on the sense of presence is studied in this thesis. The results suggest that high-quality global illumination has a positive impact on the realism and presence perceived by users in AR. Thus, future AR applications can benefit from the algorithms developed in this thesis.
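    For background, the classic two-solution differential rendering composite that underlies light interaction between real and virtual objects is sketched below. The thesis's contribution of computing both solutions in a single GPU pass is not reproduced here, and all names are illustrative.

```python
import numpy as np

def differential_composite(camera_image, rendered_with_virtual,
                           rendered_without_virtual, virtual_mask):
    """Classic differential rendering: where virtual objects are visible,
    show the rendered result; elsewhere, add the change in illumination
    (shadows, colour bleeding) caused by the virtual objects to the real
    camera image. Images are HxWx3 in [0, 1]; `virtual_mask` is an HxW
    boolean mask of pixels covered by virtual geometry."""
    delta = rendered_with_virtual - rendered_without_virtual
    return np.where(virtual_mask[..., None],
                    rendered_with_virtual,
                    np.clip(camera_image + delta, 0.0, 1.0))
```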
  • Item
    Concepts and Algorithms for the Deformation, Analysis, and Compression of Digital Shapes
    (von Tycowicz, 2014-05-05) von Tycowicz, Christoph
    We propose new model reduction techniques for the construction of reduced shape spaces of deformable objects and for the approximation of reduced internal forces that accelerate the construction of a reduced dynamical system, increase the accuracy of the approximation, and simplify the implementation of model reduction. Based on the model reduction techniques, we propose frameworks for deformation-based modeling and simulation of deformable objects that are interactive, robust and intuitive to use. We devise efficient numerical methods, tailored to the reduced systems, to solve the inherent nonlinear problems. We demonstrate the effectiveness in different experiments with elastic solids and shells and compare the frameworks to alternative approaches to illustrate their high performance. We study the spectra and eigenfunctions of discrete differential operators that can serve as an alternative to the discrete Laplacians for applications in shape analysis. In particular, we construct such operators as the Hessians of deformation energies, which are in consequence sensitive to the extrinsic curvature, e.g., sharp bends. Based on the spectra and eigenmodes, we derive the vibration signature that can be used to measure the similarity of points on a surface. By taking advantage of structural regularities inherent to adaptive multiresolution meshes, we devise a lossless connectivity compression that exceeds state-of-the-art coders by a factor of two to seven. In addition, we provide extensions to sequences of meshes with varying refinement that reduce the entropy even further. Using improved context modeling to enhance the zerotree coding of wavelet coefficients, we achieve compression factors that are four times smaller than those of leading coders for irregular meshes.
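    As general background for the model reduction idea (a standard subspace formulation, not the specific construction proposed in the thesis), the reduced dynamical system can be written as follows, with U denoting the reduced basis.

```latex
% Full elastodynamics: M \ddot{u} = f(u) + f_ext, with displacements u,
% mass matrix M and internal forces f. Restricting u to a low-dimensional
% subspace spanned by the columns of a reduced basis U, u \approx U q,
% and projecting onto that subspace gives the reduced system
\begin{align}
  \tilde{M}\,\ddot{q} &= \tilde{f}(q) + U^{\top} f_{\mathrm{ext}},\\
  \tilde{M} &= U^{\top} M\, U, \qquad \tilde{f}(q) = U^{\top} f(U q),
\end{align}
% which is integrated in time over dim(q) << dim(u) unknowns. The reduced
% internal forces \tilde{f}(q) are the quantity whose approximation such
% frameworks accelerate, so that f need not be evaluated on the full mesh.
```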
  • Item
    Understanding the Structure of 3D Shapes: PolyCubes and Curve-Skeletons
    (Livesu, 2014-05-23) Livesu, Marco
    Compact representations of three-dimensional objects are very often used in computer graphics to create effective ways to analyse, manipulate and transmit 3D models. Their ability to abstract from the concrete shapes and expose their structure is important in a number of applications, spanning from computer animation, to medicine, to physical simulations. This thesis will investigate new methods for the generation of compact shape representations. In the first part, the problem of computing optimal PolyCube base complexes will be considered. PolyCubes are orthogonal polyhedra used in computer graphics to map both surfaces and volumes. Their ability to resemble the original models and at the same time expose a very simple and regular structure is important in a number of applications, such as texture mapping, spline fitting and hex-meshing. The second part will focus on medial descriptors. In particular, two new algorithms for the generation of curve-skeletons will be presented. These methods are completely based on the visual appearance of the input, therefore they are independent of the type, number and quality of the primitives used to describe a shape, thus constituting an advancement of the state of the art in the field.
  • Item
    Kaleidoscopic imaging
    (Reshetouski, Ilya, 2014-11-06) Reshetouski, Ilya
    Kaleidoscopes have a great potential in computational photography as a tool for redistributing light rays. In time-of-flight imaging the concept of the kaleidoscope is also useful when dealing with the reconstruction of the geometry that causes multiple reflections. This work is a step towards opening new possibilities for the use of mirror systems as well as towards making their use more practical. The focus of this work is the analysis of planar kaleidoscope systems to enable their practical applicability in 3D imaging tasks. We analyse important practical properties of mirror systems and develop a theoretical toolbox for dealing with planar kaleidoscopes. Based on this theoretical toolbox we explore the use of planar kaleidoscopes for multi-view imaging and for the acquisition of 3D objects. Knowledge of the mirrors' positions is crucial for these multi-view applications. On the other hand, the reconstruction of the geometry of a mirror room from time-of-flight measurements is also an important problem. We therefore employ the developed tools for solving this problem using multiple observations of a single scene point.
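    The geometric core of a planar kaleidoscope is that every mirror reflection creates a virtual copy of the camera or scene. A minimal sketch of this bookkeeping, assuming mirrors given as (unit normal, offset) planes and ignoring the visibility/validity of reflection sequences, is shown below; it is generic geometry, not the calibration or reconstruction method of the thesis.

```python
import numpy as np

def reflect_point(p, n, d):
    """Mirror point p about the plane {x : n.x + d = 0}, with n of unit length.
    Applying this map to the camera centre yields the virtual camera created by
    one planar mirror; chaining reflections enumerates a kaleidoscope's views."""
    p, n = np.asarray(p, dtype=float), np.asarray(n, dtype=float)
    return p - 2.0 * (np.dot(n, p) + d) * n

def virtual_views(camera_center, mirrors, depth=2):
    """Enumerate virtual camera centres up to `depth` successive reflections in
    a set of planar mirrors given as (normal, offset) pairs. Simplified: no
    test is performed on whether a reflection sequence is actually visible."""
    views, frontier = [], [np.asarray(camera_center, dtype=float)]
    for _ in range(depth):
        frontier = [reflect_point(c, n, d) for c in frontier for (n, d) in mirrors]
        views.extend(frontier)
    return views
```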
  • Item
    Inverse rendering for scene reconstruction in general environments
    (Wu, Chenglei, 2014-07-10) Wu, Chenglei
    Demand for high-quality 3D content has been exploding recently, owing to the advances in 3D displays and 3D printing. However, due to insufficient 3D content, the potential of 3D display and printing technology has not been realized to its full extent. Techniques for capturing the real world, which are able to generate 3D models from captured images or videos, are a hot research topic in computer graphics and computer vision. Despite significant progress, many methods are still highly constrained and require lots of prerequisites to succeed. Marker-less performance capture is one such dynamic scene reconstruction technique that is still confined to studio environments. The requirements involved, such as the need for a multi-view camera setup, specially engineered lighting or green-screen backgrounds, prevent these methods from being widely used by the film industry or even by ordinary consumers. In the area of scene reconstruction from images or videos, this thesis proposes new techniques that succeed in general environments, even using as few as two cameras. Contributions are made in terms of reducing the constraints of marker-less performance capture on lighting, background and the required number of cameras. The primary theoretical contribution lies in the investigation of light transport mechanisms for high-quality 3D reconstruction in general environments. Several steps are taken to approach the goal of scene reconstruction in general environments. At first, the concept of employing inverse rendering for scene reconstruction is demonstrated on static scenes, where a high-quality multi-view 3D reconstruction method under general unknown illumination is developed. Then, this concept is extended to dynamic scene reconstruction from multi-view video, where detailed 3D models of dynamic scenes can be captured under general and even varying lighting, and in front of a general scene background without a green screen. Finally, efforts are made to reduce the number of cameras employed. New performance capture methods using as few as two cameras are proposed to capture high-quality 3D geometry in general environments, even outdoors.
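    One standard way to make "inverse rendering under general unknown illumination" concrete (textbook background; the thesis develops richer models, including dynamic scenes and varying lighting) is a Lambertian image formation model with low-order spherical harmonics lighting, estimated by minimizing the photometric error.

```latex
% Lambertian shading under distant illumination, expressed in spherical
% harmonics basis functions H_k evaluated at the surface normal n(x):
%   I(x) \approx \rho(x) \sum_{k=1}^{9} \ell_k \, H_k\!\big(n(x)\big)
% Inverse rendering then estimates albedo \rho, lighting \ell and refined
% normals n by minimizing the photometric residual
\begin{equation}
  E(\rho, \ell, n) \;=\; \sum_{x} \Big\| I_{\mathrm{obs}}(x)
      - \rho(x) \sum_{k} \ell_k H_k\big(n(x)\big) \Big\|^2
      \;+\; \lambda \, R(n),
\end{equation}
% where R(n) is a regularizer; refining n(x) from the shading residual is
% what adds fine geometric detail to a coarse multi-view reconstruction.
```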
  • Item
    Adaptive Semantics Visualization
    (2014-11-27) Nazemi, Kawa
    Human access to the increasing amount of information and data plays an essential role in professional contexts and in everyday life. While information visualization has developed new and remarkable ways of visualizing data and enabling the exploration process, adaptive systems focus on users' behavior to tailor information for supporting the information acquisition process. Recent research on adaptive visualization shows promising ways of synthesizing these two complementary approaches and making use of the strengths of both disciplines. The emerging methods and systems aim to increase the performance, acceptance, and user experience of graphical data representations for a broad range of users. Although the evaluation results of the recently proposed systems are promising, some important aspects of information visualization are not considered in the adaptation process. The visual adaptation is commonly limited to changing either visual parameters or replacing visualizations entirely. Further, no existing approach adapts the visualization based on both data and user characteristics. Another limitation of existing approaches is that the visualizations require training by experts in the field. In this thesis, we introduce a novel model for adaptive visualization. In contrast to existing approaches, we have focused our investigation on the potential of information visualization for adaptation. Our reference model for visual adaptation not only considers the entire transformation from data to visual representation, but also enhances it to meet the requirements for visual adaptation. Our model adapts different visual layers that were identified based on various models and studies on human visual perception and information processing. In its adaptation process, our conceptual model considers the impact of both data and user on visualization adaptation. We investigate different approaches and models and their effects on system adaptation to gather implicit information about users and their behavior. These are then transformed and applied to affect the visual representation and to model human interaction behavior with visualizations and data, in order to achieve a more appropriate visual adaptation. Our enhanced user model further makes use of the semantic hierarchy to enable a domain-independent adaptation. To address the problem that such a system requires training by experts, we introduce the canonical user model, which models the average usage behavior within the visualization environment. Our approach learns from the behavior of the average user to adapt the different visual layers and transformation steps. This approach is further enhanced with similarity and deviation analysis for individual users to determine similar behavior on an individual level and to identify behavior that differs from the canonical model. Users with similar behavior get similar visualization and data recommendations, while behavioral anomalies lead to a lower level of adaptation. Our model includes a set of various visual layouts that can be used to compose a multi-visualization interface, a sort of "visualization cockpit". This model facilitates various visual layouts to provide different perspectives and to enhance the ability to solve difficult and exploratory search challenges. Data from different data sources can be visualized and compared in a visual manner. These different visual perspectives on data can be chosen by users or can be automatically selected by the system.
    This thesis further introduces the implementation of our model, which includes additional approaches for an efficient adaptation of visualizations, as a proof of feasibility. We further conduct a comprehensive user study that aims to prove the benefits of our model and underscore limitations for future work. The user study, with 53 participants overall, uses four conditions focused on our enhanced reference model to evaluate the adaptation effects of the different visual layers.
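    As a loose illustration of the similarity and deviation analysis described above (the feature choice and similarity measure here are hypothetical, not the thesis's user model), an individual user's interaction-frequency vector can be compared with the canonical model as follows.

```python
import numpy as np

def deviation_from_canonical(user_counts, canonical_counts):
    """Compare an individual user's interaction-frequency vector with the
    canonical (average) usage model via cosine similarity. High similarity
    suggests reusing canonical adaptations; strong deviation argues for a
    lower level of adaptation. Returns (similarity, deviation)."""
    u = np.asarray(user_counts, dtype=float)
    c = np.asarray(canonical_counts, dtype=float)
    denom = np.linalg.norm(u) * np.linalg.norm(c)
    similarity = float(u @ c / denom) if denom > 0 else 0.0
    return similarity, 1.0 - similarity
```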
  • Item
    Quad Layouts – Generation and Optimization of Conforming Quadrilateral Surface Partitions
    (2014-12) Campen, Marcel
    The efficient, computer-aided or automatic generation of quad layouts, i.e. the partitioning of an object’s surface into simple networks of conforming quadrilateral patches, is a task that – despite its importance and utility in Computer Graphics and Geometric Modeling – has received relatively little attention in the past. As a consequence, this task is most often performed manually by well-trained experts in practice, where quad layouts are of particular interest for surface representation and parameterization tasks. Deeper analysis reveals the inherent complexity of this problem, which might be one of the underlying reasons for this situation. In this thesis we investigate the structure of the problem and the commonly relevant quality criteria. Based on this we develop novel efficient solution strategies and algorithms for the generation of high-quality quad layouts. In particular, we present a fully automatic as well as an interactive pipeline for this task. Both are based on splitting the hard problem into sub-problems, each with a simpler structure. For each sub-problem we design efficient, custom-tailored optimization algorithms motivated by the geometric nature of these problems. In this process we pay attention to compatibility, such that these algorithms can be applied in sequence, forming the stages of efficient quad layouting pipelines. An important aspect of the quad layout problem is the complexity of the quality objective. The quality is typically a function of the layout’s structural complexity, its topological connectivity, and its geometrical embedding, i.e. of discrete, combinatorial, and continuous aspects. Furthermore, application-specific demands can be quite fuzzy and hard to formalize. Our automatic pipeline follows a generic set of quality criteria that are common to most use cases. The best way to enable optimization with respect to more specific design intents is to include users in the process, enabling them to infuse their expert knowledge. In contrast to prevalent manual construction processes, our interactive pipeline supports the user to a large extent. Structural consistency is automatically taken care of, geometrically appropriate design operations are automatically proposed, and next steps that should be taken are indicated. In this way the required effort is reduced to a minimum, while full design flexibility is still provided. Finally, we present novel methods for the computation of geodesic distances and paths on surfaces – for standard as well as anisotropic metrics. These play a key role in several parts of our pipelines, and shortcomings of available solutions compelled the development of novel alternatives.
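    Since geodesic distances and paths are named as a key ingredient of the pipelines, a minimal stand-in is Dijkstra's algorithm on the mesh edge graph, sketched below. Edge-path distances only approximate true surface geodesics, and the thesis develops more accurate and anisotropic alternatives; the data layout here is an assumption.

```python
import heapq

def graph_geodesic(vertices, edges, source):
    """Dijkstra shortest-path distances along mesh edges, as a simple proxy
    for geodesic distances from `source` to all vertices. `vertices` is an
    iterable of vertex ids; `edges` maps vertex -> list of (neighbor, length)."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                      # stale heap entry
        for w, length in edges.get(v, []):
            nd = d + length
            if nd < dist[w]:
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist
```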
  • Item
    Edit Propagation using Geometric Analogies
    (2014-09-19) Guerrero, Paul
    Modeling complex geometrical shapes, like city scenes or terrains with dense vegetation, is a time-consuming task that cannot be automated trivially. The problem of creating and editing many similar, but not identical, models requires specialized methods that understand what makes these objects similar in order to either create new variations of these models from scratch or to propagate edit operations from one object to all similar objects. In this thesis, we present new methods to significantly reduce the effort required to model complex scenes. For 2D scenes containing deformable objects, such as fish or snakes, we present a method to find partial matches between deformed shapes that can be used to transfer localized properties such as texture between matching shapes. Shapes are considered similar if they are related by pointwise correspondences and if neighboring points have correspondences with similar transformation parameters. Unlike previous work, this approach allows us to successfully establish matches between strongly deformed objects, even in the presence of occlusions and sparse or unevenly distributed sets of matching features. For scenes consisting of 2D shape arrangements, such as floor plans, we propose methods to find similar locations in the arrangements, even though the arrangements themselves are dissimilar. Edit operations, such as object placements, can be propagated between similar locations. Our approach is based on simple geometric relationships between the location and the shape arrangement, such as the distance of the location to a shape boundary or the direction to the closest shape corner. Two locations are similar if they have many similar relations to their surrounding shape arrangement. To the best of our knowledge, there is no method that explicitly attempts to find similar locations in dissimilar shape arrangements. We demonstrate populating large scenes such as floor plans with hundreds of objects like pieces of furniture, using relatively few edit operations. Additionally, we show that providing several examples of an edit operation helps narrow down the supposed modeling intention of the user and improves the quality of the edit propagation. A probabilistic model is learned from the examples and used to suggest similar edit operations. Also, extensions are shown that allow the application of this method to 3D scenes. Compared to previous approaches that use entire scenes as examples, our method provides more user control and has no need for large databases of example scenes or domain-specific knowledge. We demonstrate generating 3D interior decoration and complex city scenes, including buildings with detailed facades, using only a few edit operations.
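    A minimal, hypothetical formalization of such location descriptors (two features only; the actual method uses a richer set of geometric relations) might look like the sketch below.

```python
import numpy as np

def location_descriptor(p, segments):
    """Describe a 2D location by simple relations to a shape arrangement given
    as line segments (a, b): the distance and direction to the closest boundary
    point. A two-feature toy descriptor in the spirit of the approach above."""
    p = np.asarray(p, dtype=float)
    best_d, best_dir = np.inf, np.zeros(2)
    for a, b in segments:
        a, b = np.asarray(a, float), np.asarray(b, float)
        t = np.clip(np.dot(p - a, b - a) / max(np.dot(b - a, b - a), 1e-12), 0.0, 1.0)
        closest = a + t * (b - a)
        d = np.linalg.norm(p - closest)
        if d < best_d:
            best_d, best_dir = d, (closest - p) / max(d, 1e-12)
    return np.array([best_d, *best_dir])

def location_similarity(desc_a, desc_b):
    """Locations with close descriptors are treated as similar, so an edit
    applied at one location can be propagated to the other."""
    return np.exp(-np.linalg.norm(desc_a - desc_b) ** 2)
```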
  • Item
    Strategies for efficient parallel visualization
    (OPUS - Publication Server of the University of Stuttgart, 2014) Frey, Steffen
    Visualization is a crucial tool for analyzing data and gaining a deeper understanding of underlying features. In particular, interactive exploration has proven to be indispensable, as it can provide new insights beyond the original focus of analysis. However, efficient interaction requires almost immediate feedback to user input, and achieving this poses a big challenge for the visualization of data that is ever-growing in size and complexity. This motivates the increasing effort in recent years towards high-performance visualization using powerful parallel hardware architectures. The analysis and rendering of large volumetric grids and time-dependent data is particularly challenging. Despite many years of active research, significant improvements are still required to enable efficient explorative analysis for many use cases and scenarios. In addition, while many diverse kinds of approaches have been introduced to tackle different angles of the issue, no consistent scheme exists to classify previous efforts and to guide further development. This thesis presents research that enables or improves the interactive analysis in various areas of scientific visualization. To begin with, new techniques for the interactive analysis of time-dependent field and particle data are introduced, focusing both on the expressiveness of the visualization and on a structure allowing for efficient parallel computing. Volume rendering is a core technique in scientific visualization that induces significant cost. In this work, approaches are presented that decrease this cost by means of a new acceleration data structure, and handle it dynamically by adapting the progressive visualization process on the fly based on the estimation of spatio-temporal errors. In addition, view-dependent representations are presented that reduce both the size and the rendering cost of volume data with only minor quality impact for a range of camera configurations. Remote and in-situ rendering approaches are discussed for enabling interactive volume visualization without having to move the actual volume data. In detail, an approach for integrated adaptive sampling and compression is introduced, as well as a technique allowing for user prioritization of critical results. Computations are further dynamically redistributed to reduce load imbalance. In detail, this encompasses the tackling of divergence issues on GPUs, the adaptation of the volume data assigned to each node for rendering in distributed GPU clusters, and the detailed consideration of the different performance characteristics of the components in a heterogeneous system. From these research projects, a variety of generic strategies towards high-performance visualization is extracted, ranging from the parallelization of the program structure and algorithmic optimization, to the efficient execution on parallel hardware architectures. The introduced strategy tree further provides a consistent and comprehensive hierarchical classification of these strategies. It can provide guidance during development to identify and exploit potentials for improving the performance of visualization applications, and it can be used as an expressive taxonomy for research on high-performance visualization and computer graphics.
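    To make the cost structure of volume rendering concrete, the sketch below shows standard front-to-back emission-absorption compositing with early ray termination, one of the classic per-ray cost-saving mechanisms; it is generic background, not the acceleration data structure introduced in this thesis.

```python
import numpy as np

def composite_ray(samples_rgb, samples_alpha, termination=0.99):
    """Front-to-back emission-absorption compositing of one ray through a
    volume. `samples_rgb` is (N, 3) and `samples_alpha` is (N,), both already
    mapped through the transfer function. Early ray termination stops the loop
    once accumulated opacity makes further samples invisible."""
    color = np.zeros(3)
    alpha = 0.0
    for rgb, a in zip(samples_rgb, samples_alpha):
        color += (1.0 - alpha) * a * rgb
        alpha += (1.0 - alpha) * a
        if alpha >= termination:   # early ray termination
            break
    return color, alpha
```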
  • Item
    Interactions with Gigantic Point Clouds
    (2014-06-25) Scheiblauer, Claus
    During the last decade the increased use of laser range-scanners for sampling the environment has led to gigantic point cloud data sets. Due to the size of such data sets, tasks like viewing, editing, or presenting the data have become a challenge per se, as the point data is too large to fit completely into the main memory of a customary computer system. In order to accomplish these tasks and enable the interaction with gigantic point clouds on consumer grade computer systems, this thesis presents novel methods and data structures for efficiently dealing with point cloud data sets consisting of more than 10^9 point samples. To be able to quickly access point samples that are stored on disk or in memory, they have to be spatially ordered, and for this a data structure is proposed which organizes the point samples in a level-of-detail hierarchy. Point samples stored in this hierarchy can not only be rendered fast, but can also be edited; for example, existing points can be deleted from the hierarchy or new points can be inserted. Furthermore, the data structure is memory efficient, as it only uses the point samples from the original data set. Therefore, the memory consumption of the point samples on disk, when stored in this data structure, is comparable to the original data set. A second data structure is proposed for selecting points. This data structure describes a volume inside which point samples are considered to be selected, which has the advantage that the information about a selection does not have to be stored at the point samples. In addition to these two previously mentioned data structures, which represent novel contributions for point data visualization and manipulation, methods for supporting the presentation of point data sets are proposed. With these methods the user experience can be enhanced when navigating through the data. One possibility to do this is by using regional meshes that employ an out-of-core texturing method to show details at the mesoscopic scale on the surface of sampled objects, and which are displayed together with point clouds. Another possibility to increase the user experience is to use graphs in 3D space, which help users orient themselves inside point cloud models of large sites, where otherwise it would be difficult to find the places of interest. Furthermore, the quality of the displayed point cloud models can be increased by using a point size heuristic that can mimic a closed surface in areas that would otherwise appear undersampled, by utilizing the density of the rendered points in the different areas of the point cloud model. Finally, the use of point cloud models as a tool for archaeological work is proposed. Since it becomes increasingly common to document archaeologically interesting monuments with laser scanners, the number of application areas for the resulting point clouds is growing as well. These include, but are not limited to, new views of the monument that are impossible when studying the monument on-site, creating cuts and floor plans, or performing virtual anastylosis. All the previously mentioned methods and data structures are implemented in a single software application that has been developed during the course of this thesis and can be used to interactively explore gigantic point clouds.
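    A minimal sketch of a level-of-detail point hierarchy in the spirit described above follows; it is illustrative only, as the data structure in the thesis additionally supports out-of-core storage and editing and stores each point exactly once.

```python
import numpy as np

class LODNode:
    """Toy level-of-detail point hierarchy: every octree node keeps at most
    `capacity` representative points of its region and pushes the rest into
    its eight children. Drawing only the first k tree levels yields a coarse
    but complete impression of the whole cloud."""
    def __init__(self, center, half_size, capacity=1000):
        self.center = np.asarray(center, dtype=float)
        self.half_size, self.capacity = half_size, capacity
        self.points, self.children = [], None

    def insert(self, p):
        if len(self.points) < self.capacity:
            self.points.append(p)
            return
        if self.children is None:
            self.children = {}
        octant = tuple(int(p[i] > self.center[i]) for i in range(3))
        if octant not in self.children:
            offset = (np.array(octant) - 0.5) * self.half_size
            self.children[octant] = LODNode(self.center + offset,
                                            self.half_size / 2.0, self.capacity)
        self.children[octant].insert(p)

    def collect(self, max_level, level=0):
        """Gather the points of all nodes up to `max_level` for rendering."""
        pts = list(self.points)
        if self.children is not None and level < max_level:
            for child in self.children.values():
                pts.extend(child.collect(max_level, level + 1))
        return pts
```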
  • Item
    Simulation, Animation and Rendering of Crowds in Real-Time
    (2014-12-11) Beacco, Alejandro
    Nowadays crowd simulation is becoming more important in computer applications such as building evacuation planning, training, videogames, etc., featuring hundreds or thousands of agents navigating in virtual environments. Some of these applications need to run in real time in order to offer complete interaction with the user. Simulated crowds should seem natural and give a good visual impression to the user. The goal should be to produce the best possible motion and animation, while minimizing awkward movements and eliminating or hiding visual artifacts. Achieving simulation, animation and rendering of crowds in real time thus becomes a major challenge. Although each of these areas has been studied individually and improvements have been made in the literature, their integration in one real-time system is not straightforward. In the process of integrating animation, simulation and rendering of real-time crowds, we need to accept some trade-offs between accuracy and quality of results. The main goal of this thesis is to work on those three aspects of a real-time crowd visualization (simulation, animation and rendering), seeking possible speed-ups and optimizations that allow us to further increase the number of agents in the simulation, and to then integrate them in a real-time system with the maximum possible number of high-quality, natural-looking animated agents. In order to accomplish our goal we present new techniques to achieve improvements in each one of these areas: in crowd simulation we work on a multi-domain planning approach and on planning using footsteps instead of just root velocities and positions; in animation we focus on a framework to eliminate foot-sliding artifacts and on synthesizing motions of characters to follow footsteps; in rendering we provide novel techniques based on per-joint impostors. Finally we present a novel framework to progressively integrate different methods for crowd simulation, animation and rendering. The framework offers levels of detail for each of these areas, so that as new methods are integrated they can be combined efficiently to improve performance.
  • Item
    Application and validation of capacitive proximity sensing systems in smart environments
    (2014-09-18) Braun, Andreas
    Smart environments feature a number of computing and sensing devices that support occupants in performing their tasks. In the last decades there has been a multitude of advances in miniaturizing sensors and computers, while greatly increasing their performance. As a result, new devices with a plethora of functions are introduced into our daily lives. Gathering information about the occupants is fundamental in adapting the smart environment according to preference and situation. There is a large number of different sensing devices available that can provide information about the user. They include cameras, accelerometers, GPS receivers, acoustic systems, and capacitive sensors. The latter use the properties of an electric field to sense the presence and properties of conductive objects within range. They are commonly employed in the finger-controlled touch screens that are present in billions of devices. A less common variety is the capacitive proximity sensor. It can detect the presence of the human body over a distance, enabling interesting applications in smart environments. Choosing the right sensor technology is an important decision in designing a smart environment application. Apart from looking at previous use cases, this process can be supported by providing more formal methods. In this work I present a benchmarking model that is designed to support this decision process for applications in smart environments. Previous benchmarks for pervasive systems have been adapted towards sensor systems and include metrics that are specific to smart environments. Based on distinct sensor characteristics, different ratings are used as weighting factors in calculating a benchmarking score. The method is verified using popularity matching in two scientific databases. Additionally, there are extensions to cope with central tendency bias and normalization with regard to the average feature rating. Four relevant application areas are identified by applying this benchmark to applications in smart environments and capacitive proximity sensors. They are indoor localization, smart appliances, physiological sensing and gesture interaction. Each application area has a set of challenges regarding the required sensor technology, layout of the systems, and processing, which can be tackled using various new or improved methods. I present a collection of existing and novel methods that support processing data generated by capacitive proximity sensors. These are in the areas of sparsely distributed sensors, model-driven fitting methods, heterogeneous sensor systems, image-based processing and physiological signal processing. To evaluate the feasibility of these methods, several prototypes have been created and tested for performance and usability. Six of them are presented in detail. Based on these evaluations and the knowledge generated in the design process, I am able to classify capacitive proximity sensing in smart environments. This classification consists of a comparison to other popular sensing technologies in smart environments, the major benefits of capacitive proximity sensors, and their limitations. In order to support parties interested in developing smart environment applications using capacitive proximity sensors, I present a set of guidelines that support the decision process from technology selection to the choice of processing methods.
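    A toy version of such a weighted benchmarking score, with entirely hypothetical metric names, weights and scales, could look like the following; the thesis defines its own metric set, normalization and bias corrections.

```python
def benchmark_score(ratings, weights):
    """Weighted benchmarking sketch: `ratings` maps metric name -> rating of a
    sensing technology (e.g. on a 1-5 scale), `weights` maps metric name ->
    importance of that metric for the target smart-environment application."""
    total_weight = sum(weights.values())
    return sum(weights[m] * ratings.get(m, 0.0) for m in weights) / total_weight

# Example: comparing two technologies for a hypothetical indoor-localization use case.
capacitive = {"range": 3, "resolution": 3, "privacy": 5, "cost": 4}
camera     = {"range": 5, "resolution": 5, "privacy": 1, "cost": 3}
weights    = {"range": 2, "resolution": 1, "privacy": 3, "cost": 1}
print(benchmark_score(capacitive, weights), benchmark_score(camera, weights))
```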
  • Item
    Constrained Camera Motion Estimation and 3D Reconstruction
    (2014-11-28) Kurz, Christian
    The creation of virtual content from visual data is a tedious task which requires a high amount of skill and expertise. Although the majority of consumers is in possession of multiple imaging devices that would enable them to perform this task in principle, the processing techniques and tools are still intended for the use by trained experts. As more and more capable hardware becomes available, there is a growing need among consumers and professionals alike for new flexible and reliable tools that reduce the amount of time and effort required to create high-quality content. This thesis describes advances of the state of the art in three areas of computer vision: camera motion estimation, probabilistic 3D reconstruction, and template fitting. First, a new camera model geared towards stereoscopic input data is introduced, which is subsequently developed into a generalized framework for constrained camera motion estimation. A probabilistic reconstruction method for 3D line segments is then described, which takes global connectivity constraints into account. Finally, a new framework for symmetry-aware template fitting is presented, which allows the creation of high-quality models from low-quality input 3D scans. Evaluations with a broad range of challenging synthetic and real-world data sets demonstrate that the new constrained camera motion estimation methods provide improved accuracy and flexibility, and that the new constrained 3D reconstruction methods improve the current state of the art.
  • Item
    Measurement-Based Model Estimation for Deformable Objects
    (Universidad Rey Juan Carlos, 2014-11-25) Miguel, Eder
    Deformable objects play a critical role in our lives due to their compliance. Clothing and support structures, such as mattresses, are just a few examples of their use. They are so common that an accurate prediction of their behavior under a variety of environments and situations is mandatory in order to design products with the desired functionalities. However, obtaining realistic simulations is a difficult task. Both an appropriate deformation model and parameters that produce the desired behavior must be used. On one hand, there exist many deformation models for elasticity, but there are few capable of capturing other complex effects that are critical in order to obtain the desired realism. On the other hand, the task of estimating model parameters is usually performed using a trial-and-error method, with the corresponding waste of time. In this thesis we develop novel deformation models and parameter estimation methods that allow us to increase the realism of deformable object simulations. We present deformation models that capture several of these complex effects: hyperelasticity, extreme nonlinearities, heterogeneities and internal friction. In addition, we design parameter estimation methods that take advantage of the structure of the measured data and avoid common problems that arise when numerical optimization algorithms are used. First, we focus on cloth and present a novel measurement system that captures the behavior of cloth under a variety of experiments. It produces a complete set of information including the 3D reconstruction of the cloth sample under test as well as the forces being applied. We design a parameter estimation pipeline and use this system to estimate parameters for several popular cloth models and to evaluate their performance and suitability in terms of the quality of the obtained estimations. We then develop a novel, general and flexible deformation model based on additive energy density terms. By using independent components, this model allows us to isolate the effect that each one has on the global behavior of the deformable object, to replicate existing deformation models and to produce new ones. It also allows us to apply incremental approaches to parameter estimation. We demonstrate its advantages by applying it in a wide variety of scenarios, including cloth simulation, modeling of heterogeneous soft tissue and capture of extreme nonlinearities in finger skin. Finally, a fundamental observation extracted from the estimation of parameters for cloth models is that, in the real world, hysteresis has a huge effect on the mechanical behavior and visual appearance of cloth. The source of hysteresis is the internal friction produced by the interactions between yarns. Mechanically, it can produce very different deformations in the loading and unloading cycles, while visually, it is responsible for effects such as persistent deformations, preferred wrinkles or history-dependent folds. We develop an internal friction model, present a measurement and estimation system that produces elasticity and internal friction parameters, and analyse the visual impact of internal friction in cloth simulation.
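    To illustrate the parameter estimation idea for an additive model in the simplest possible setting (scalar force readings, weights that enter the model linearly, no internal friction), the weights can be fit by linear least squares; this is a stand-in for intuition, not the estimation pipeline of the thesis.

```python
import numpy as np

def fit_additive_weights(component_forces, measured_forces):
    """Fit the weights w_i of an additive deformation model whose internal
    force is f(x) = sum_i w_i * f_i(x). `component_forces` has shape
    (n_samples, n_components): column i holds the force predicted by energy
    term i at each measured configuration; `measured_forces` has shape
    (n_samples,) and holds the corresponding force readings. Linear least
    squares then yields the weights directly."""
    weights, *_ = np.linalg.lstsq(component_forces, measured_forces, rcond=None)
    return weights
```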
  • Item
    Image Space Adaptive Rendering
    (2014-06-06) Rousselle, Fabrice
    In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that the error of each pixel value is below a predefined threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, in an attempt to achieve an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the MSE by distributing additional samples according to the magnitude of the residual MSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we densely sample only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
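    The sampling step of such an iterative loop can be summarized as distributing a fixed budget of additional samples proportionally to each pixel's estimated residual error. The sketch below assumes such a per-pixel error estimate is already available and leaves open how it is obtained (e.g. SURE-based or variance-based estimators); it is an illustration, not the thesis's implementation.

```python
import numpy as np

def distribute_samples(residual_error, budget):
    """Distribute `budget` additional samples over the image proportionally to
    the estimated residual per-pixel error of the current reconstruction, so
    that only regions the reconstruction cannot resolve are sampled densely.
    Returns an integer sample count per pixel (flooring may leave a few
    samples of the budget unassigned)."""
    weights = np.maximum(residual_error, 0.0).astype(float)
    total = weights.sum()
    if total == 0.0:
        # Nothing left to refine: spread the budget uniformly.
        return np.full(residual_error.shape, budget // residual_error.size, dtype=int)
    return np.floor(budget * weights / total).astype(int)
```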
  • Item
    Multi-resolution shape analysis based on discrete Morse decompositions
    (2014) Iuricich, Federico
    Representing and efficiently managing scalar fields and morphological information extracted from them is a fundamental issue in several applications including terrain modeling, static and dynamic volume data analysis (i.e. for medical or engineering purposes), and time-varying 3D scalar fields. Data sets have usually a very large size and adhoc methods to reduce their complexity are needed. Morse theory offers a natural and mathematically-sound tool to analyze the structure of a discrete scalar field as well as to represent it in a compact way through decompositions of its domain. Starting from a Morse function, we can decompose the domain of the function into meaningful regions associated with the critical points of the field. Such decompositions, called ascending and descending Morse complexes, are characterized by the integral lines emanating from, or converging to, some critical point of a scalar field. Moreover, another decomposition can be defined by intersecting the ascending and descending Morse complexes which is called the Morse-Smale complex. Unlike the ascending and descending Morse complexes, the Morse-Smale complex decomposes the domain into a set of regions characterized by a uniform flow of the gradient between two critical points. In this thesis, we address the problem of computing and efficiently extracting Morse representations from a scalar field. The starting point of our research is defining a representation for both ascending and descending Morse complexes. We have defined a dual representation for the two Morse complexes, called Morse incidence graph. Then we have fully investigated all the existing algorithms to compute a Morse or Morse-Smale complex. Thus, we have reviewed most important algorithms based on different criteria such as discrete complex used to describe the domain, features extracted by the algorithm, critical points used to perform the extraction and entities used by the segmentation process. Studying such algorithms has led us to investigate the strengths and weaknesses of both the Morse theory adaptations to the discrete case, piecewise-linear Morse theory and the discrete Morse theory due to Forman. We have defined and investigated two dimension-independent simplification operators to simplify a Morse complex and we have defined them in terms of updates on the Morse complexes and on the Morse incidence graph. Thanks to such simplification operators and their dual refinement operators, we have defined and developed a multi-resolution model to extract morphological representations of a given scalar field at different resolution levels. A similar hierarchical approach has been used to define and develop a multi-resolution representation of a cell complex based on homology-preserving simplification and refinement operators which allows us to extract representations of a cell complex at different resolutions, all with the same homology of the original complex, and to efficiently compute homology generators on such complexes.