2013

Visual Analytics Approaches for Descriptor Space Comparison and the Exploration of Time Dependent Data

Bremm, Sebastian

Continuity and Interpolation Techniques for Computer Graphics

González García, Francisco

Processing and tracking human motions using optical, inertial, and depth sensors

Helten, Thomas

Advanced Editing Methods for Image and Video Sequences

Granados, Miguel

Volumetric Visualization Techniques of Rigid and Deformable Models for Surgery Simulation

Herrera Asteasu, Imanol

Algorithms and Interfaces for Real-Time Deformation of 2D and 3D Shapes

Jacobson, Alec

Of Assembling Small Sculptures and Disassembling Large Geometry

Kerber, Jens

The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation

Kirschner, Matthias

Curve Analysis with Applications to Archaeology

Kolomenkin, Michael

Reciprocal Shading for Mixed Reality

Knecht, Martin

Interactive Visual Analysis in Automotive Engineering Design

Konyha, Zoltan

Bayesian and Quasi Monte Carlo Spherical Integration for Global Illumination

Marques, Ricardo

Shapes in Vector Fields

Martinez Esturo, Janick

Computational Imaging: Combining Optics, Computation and Perception

Masia, Belen

Smart Interactive Vessel Visualization in Radiology

Mistelbauer, Gabriel

Surface Appearance Estimation from Video Sequences

Palma, Gianpaolo

Reconstruction of 3D Models from Images and Point Clouds with Shape Primitives

Reisner-Kollmann, Irene

Data Driven Analysis of Faces from Images

Scherbaum, Kristina

Information Retrieval for Multivariate Research Data Repositories

Scherer, Maximilian

Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing

Schwenk, Karsten

Processing Semantically Enriched Content for Interactive 3D Visualizations

Settgast, Volker

Cache based Optimization of Stencil Computations - An Algorithmic Approach

Shaheen, Mohammed

Statistical Part-based Models for Object Detection in Large 3D Scans

Sunkel, Martin

Interactive Multiresolution and Multiscale Visualization of Large Volume Data

Suter, Susanne.


Recent Submissions

Now showing 1 - 24 of 24
  • Item
    Visual Analytics Approaches for Descriptor Space Comparison and the Exploration of Time Dependent Data
    (Bremm, 2013-12-02) Bremm, Sebastian
    Modern technologies allow us to collect and store increasing amounts of data. However, their analysis is often difficult. For that reason, Visual Analytics combines data mining and visualization techniques to explore and analyze large amounts of complex data. Visual Analytics approaches exist for various problems and applications, but all share the idea of a tight combination of visualization and automatic analysis. Their respective implementations are highly specialized on the given data and the analytical task. In this thesis I present new approaches for two specific topics, visual descriptor space comparison and the analysis of time series. Visual descriptor space comparison enables the user to analyze different representations of complex datasets, e.g., phylogenetic trees or chemical compounds. I propose approaches for data sets with hierarchical or unknown structure, each combining an automatic analysis with interactive visualization. For hierarchically organized data, I suggest a novel similarity score embedded in an interactive analysis framework linking different views, each specialized on a particular analytical task. This analysis framework is evaluated in cooperation with biologists in the area of phylogenetic research. To extend the scalability of my approach, I introduce CloudTrees, a new visualization technique for the comparison of large trees with thousands of leaves. It reduces overplotting problems by ensuring the visibility of small but important details like high-scoring subtrees. For the comparison of data with unknown structure, I assess several state-of-the-art projection quality measures to analyze their capability for descriptor comparison. For the creation of appropriate ground truth test data, I suggest an interactive tool called PCDC for the controlled creation of high-dimensional data with different properties like data distribution or number and size of contained clusters. For the visual comparison of unknown structured data, I introduce a technique based on the comparison of two-dimensional projections of the descriptors using a two-dimensional colormap. I present the approach for scatterplots and extend it to Self-Organizing Maps (SOMs), including reliability encoding. I embed the automatic and visual comparison in an interactive analysis pipeline, which automatically calculates a set of representative descriptors out of a larger collection of descriptors. For a deeper analysis of the proposed result and the underlying characteristics of the input data, the analyst can follow each step of the pipeline. The approach is applied to a large set of chemical data in a high-throughput screening analysis scenario. For the analysis of time-dependent, categorical data I propose a new approach called Time Parallel Sets (TIPS). It focuses on the analysis of group changes of objects in large datasets. Different automatic algorithms identify and select potentially interesting points in time for a detailed analysis. The user can interactively track groups or single objects, add or remove selected points in time, or change parameters of the detection algorithms according to the analytical goal. The approach is applied to two scenarios: emergency evacuation of buildings and tracking of mobile phone calls over long time periods. Large time series can be compressed by transforming them into sequences of symbols, where each symbol represents a set of similar subsequences in time.
    For these time sequences, I propose new visual-analytical tools, starting with an interactive, semi-automatic definition of symbol similarity. Based on this, the sequences are visualized using different linked views, each specialized on other analytical problems. As an example use case, a financial dataset containing the risk estimations and return values of 60 companies over 500 days is analyzed.
  • Item
    Continuity and Interpolation Techniques for Computer Graphics
    (González García, 2013) González García, Francisco
    In Computer Graphics applications, it is a common practice to texture 3D models to apply material properties to them. Then, once the models are textured, they are deformed to create new poses that can be more appropriate for the needs of a certain scene and, finally, those models are visualized with a rendering algorithm. So, it is evident that mesh texturing, mesh deformation and rendering are still key parts of Computer Graphics. In these fields much research has been done, resulting in methods that allow the creation of computer-generated images in a more flexible, robust and efficient way. Despite this, there exist improvements to be done, as many of those approaches suffer from continuity problems that hamper interpolation procedures. Thus, in this thesis we present algorithms that address continuity in key areas of Computer Graphics. In the field of mesh texturing, we introduce a new algorithm, called Continuity Mapping, that allows a continuous mapping of multi-chart textures on 3D models. This type of parameterization breaks a continuous model into a set of disconnected charts in texture space, making discontinuities appear and causing serious problems for common applications like texture filtering and continuous simulations in texture space. Our approach makes any multi-chart parameterization seamless by the use of a bidirectional mapping between areas outside the charts and areas inside, as well as the usage of a set of virtual triangles that sew the charts to address the sampling mismatch produced at chart boundaries. Continuity Mapping does not require any modification of the artist-provided textures, it is fully automatic, and it has small memory and computational costs. To deform a model and create new poses, we propose a novel cage-based deformation approach. Up to now, cage-based deformation techniques were limited to the usage of single cages because of the continuity problems existing at cage boundaries. As a consequence, they cannot locally deform a region of a model, and the time and memory consumption is increased. We introduce *Cages, a technique which allows the usage of multiple cages enclosing the model, at multiple levels of detail, for easier and faster mesh deformation. The proposed approach solves the discontinuities of previous approaches by smoothly blending each cage deformation and allowing the usage of heterogeneous sets of coordinates, giving more flexibility to the final user. Finally, we propose a new rendering acceleration technique, called I-Render, for fast and approximate Ray Tracing. First, we perform a pre-processing clustering on the input mesh that builds upon information-theoretic tools to group triangles by their similar features. These clusters define regions of smooth variation, as well as regions of sharp transitions (discontinuities). Then, we introduce a new multi-pass rendering algorithm that uses that information to decide which areas of the final image can be interpolated and which require more involved calculations. All this process is carried out completely in screen space and, as a consequence, our approach can be used in addition to common spatial acceleration data structures. I-Render also supports animated models.
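    As an illustrative sketch only (not the thesis's *Cages coordinates): cage-based deformation evaluates each model vertex as a weighted combination of cage vertices via precomputed generalized barycentric coordinates, and overlapping cage deformations can be blended with smooth per-vertex weights. Function names and parameters below are hypothetical.

```python
import numpy as np

def apply_cage(coords, cage_vertices):
    """Deform model vertices with one cage.

    coords        : (n, c) generalized barycentric coordinates, one row of
                    weights per model vertex w.r.t. the c cage vertices
    cage_vertices : (c, 3) current (user-edited) cage vertex positions
    Returns the (n, 3) deformed model vertices.
    """
    return coords @ cage_vertices

def blend_cage_deformations(deformed_a, deformed_b, blend):
    """Smoothly blend two overlapping cage deformations (e.g., a coarse
    global cage and a fine local cage).  'blend' is a smooth per-vertex
    weight in [0, 1]; smoothness of this weight avoids visible seams."""
    w = blend[:, None]
    return (1.0 - w) * deformed_a + w * deformed_b
```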
  • Item
    Processing and tracking human motions using optical, inertial, and depth sensors
    (Helten, Thomas, 2013-12-13) Helten, Thomas
    The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motions such as, for example, the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations such as in the presence of occlusions. Here, we exploit the availability of complementary inertial sensor information.
  • Item
    Advanced Editing Methods for Image and Video Sequences
    (Granados, 2013-09-10) Granados, Miguel
    In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: first, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits need to be performed in a way that the result looks plausible to humans, but without having to recover detailed models of the content of the scene, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results that surpass the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.
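    For reference, labeling energies of the kind minimized with graph cuts typically take the following generic pairwise-MRF form (notation is generic, not quoted from the thesis): each pixel p is assigned a label L_p (e.g., a source exposure or candidate patch).

```latex
E(L) \;=\; \sum_{p \in \mathcal{P}} D_p(L_p)
      \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(L_p, L_q)
```

    Here D_p is a data term (e.g., agreement with a camera noise model) and V_pq is a smoothness term penalizing visible seams between neighboring pixels p and q.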
  • Item
    Volumetric Visualization Techniques of Rigid and Deformable Models for Surgery Simulation
    (Herrera Asteasu, 2013-10-15) Herrera Asteasu, Imanol
    Virtual reality computer simulation is nowadays widely used in various fields, such as aviation, the military or medicine. However, current simulators do not completely fulfill the necessary requirements for some fields. For example, in medicine many requirements have to be met in order to allow a really meaningful simulation. However, most current medical simulators do not adequately meet them, reducing the usability of these simulators for certain aspects. One of these requirements is the visualization, which in the case of medicine has to deal with unusual data sets, i.e. volume datasets. Additionally, training simulation for medicine needs to calculate and visualize the physical deformations of tissue, which adds an additional challenge to the visualization in these types of simulators. In order to overcome these limitations, a prototype of a patient-specific neurosurgery simulator has been developed. This simulator features a fully volumetric visualization of patient data, physical interaction with the models through the use of haptic devices, and realistic physical simulation of the tissues. This thesis presents a study of the visualization methods necessary to achieve high-quality visualization in such a simulator. The different possibilities for rigid volumetric visualization have been studied. As a result, improvements to current volumetric visualization frameworks have been made. Additionally, the use of direct volumetric isosurfaces for certain cases has been studied. The resulting visualization scheme has been demonstrated by an intermediate craniotomy simulator. Furthermore, the use of deformable volumetric models has been studied. The necessary algorithms for this type of visualization have been developed and the different rendering options have been experimentally studied. This study gives the necessary information to make informed decisions about the visualization in the neurosurgery simulator prototype.
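    As a minimal sketch of the core of volumetric ray casting referred to above (not the thesis's renderer; function and parameter names are illustrative), front-to-back compositing accumulates color and opacity along each viewing ray:

```python
import numpy as np

def composite_ray(samples, transfer_function, step):
    """Front-to-back compositing of scalar samples taken along one ray.

    samples           : 1D array of interpolated volume samples
    transfer_function : maps a scalar to (r, g, b, alpha)
    step              : sampling distance, used for opacity correction
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer_function(s)
        a = 1.0 - (1.0 - a) ** step            # opacity correction for step size
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
    return color, alpha
```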
  • Item
    Algorithms and Interfaces for Real-Time Deformation of 2D and 3D Shapes
    (Jacobson, 2013-05-01) Jacobson, Alec
    This thesis investigates computer algorithms and user interfaces which assist in the process of deforming raster images, vector graphics, geometric models and animated characters in real time. Many recent works have focused on deformation quality, but often at the sacrifice of interactive performance. A goal of this thesis is to approach such high quality but at a fraction of the cost. This is achieved by leveraging the geometric information implicitly contained in the input shape and the semantic information derived from user constraints. Existing methods also often require or assume a particular interface between their algorithm and the user. Another goal of this thesis is to design user interfaces that are not only ancillary to real-time deformation applications, but also endowing to the user, freeing maximal creativity and expressiveness. This thesis first deals with discretizing continuous Laplacian-based energies and equivalent partial differential equations. We approximate solutions to higher-order polyharmonic equations with piecewise-linear triangle meshes in a way that supports a variety of boundary conditions. This mathematical foundation permeates the subsequent chapters. We aim this energy-minimization framework at skinning weight computation for deforming shapes in real-time using linear blend skinning (LBS). We add additional constraints that explicitly enforce boundedness and later, monotonicity. We show that these properties and others are mandatory for intuitive response. Through the boundary conditions of our energy optimization and tetrahedral volume meshes we can support all popular types of user control structures in 2D and 3D. We then consider the skeleton control structure specifically, and show that with small changes to LBS we can expand the space of deformations allowing individual bones to stretch and twist without artifacts. We also allow the user to specify only a subset of the degrees of freedom of LBS, automatically choosing the rest by optimizing nonlinear, elasticity energies within the LBS subspace. We carefully manage the complexity of this optimization so that real-time rates are undisturbed. In fact, we achieve unprecedented rates for nonlinear deformation. This optimization invites new control structures, too: shape-aware inverse kinematics and disconnected skeletons. All our algorithms in 3D work best on volume representations of solid shapes. To ensure their practical relevancy, we design a method to segment inside from outside given a shape represented by a triangle surface mesh with artifacts such as open boundaries, non-manifold edges, multiple connected components and self-intersections. This brings a new level of robustness to the field of volumetric tetrahedral meshing. The resulting quiver of algorithms and interfaces will be useful in a wide range of applications including interactive 3D modeling, 2D cartoon keyframing, detailed image editing, and animations for video games and crowd simulation.
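    In standard notation (a summary of the well-known model, not copied from the thesis), linear blend skinning and the boundedness property argued for above can be written as:

```latex
v_i' \;=\; \sum_{j=1}^{m} w_j(v_i)\, T_j \begin{pmatrix} v_i \\ 1 \end{pmatrix},
\qquad \sum_{j=1}^{m} w_j(v_i) = 1,
\qquad 0 \le w_j(v_i) \le 1
```

    where the weights w_j are obtained by minimizing a higher-order (e.g., biharmonic) Laplacian energy subject to these bound constraints.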
  • Item
    Of Assembling Small Sculptures and Disassembling Large Geometry
    (Kerber, 2013-09-17) Kerber, Jens
    This thesis describes the research results and contributions that have been achieved during the author's doctoral work. It is divided into two independent parts, each of which is devoted to a particular research aspect. The first part covers the true-to-detail creation of digital pieces of art, so-called relief sculptures, from given 3D models. The main goal is to limit the depth of the contained objects with respect to a certain perspective without compromising the initial three-dimensional impression. Here, the preservation of significant features and especially their sharpness is crucial. Therefore, it is necessary to overemphasize fine surface details to ensure their perceptibility in the more complanate relief. Our developments are aimed at improving the flexibility and user-friendliness of the generation process. The main focus is on providing real-time solutions with intuitive usability that make it possible to create precise, lifelike and aesthetic results. These goals are reached by a GPU implementation, the use of efficient filtering techniques, and the replacement of user-defined parameters by adaptive values. Our methods are capable of processing dynamic scenes and allow the generation of seamless artistic reliefs which can be composed of multiple elements. The second part addresses the analysis of repetitive structures, so-called symmetries, within very large data sets. The automatic recognition of components and their patterns is a complex correspondence problem which has numerous applications ranging from information visualization and compression to automatic scene understanding. Recent algorithms reach their limits with a growing amount of data, since their runtimes rise quadratically. Our aim is to make even massive data sets manageable. Therefore, it is necessary to abstract features and to develop a suitable, low-dimensional descriptor which ensures an efficient, robust, and purposive search. A simple inspection of the proximity within the descriptor space helps to significantly reduce the number of necessary pairwise comparisons. Our method scales quasi-linearly and allows a rapid analysis of data sets which could not be handled by prior approaches because of their size.
  • Item
    The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation
    (Kirschner, 2013-07-04) Kirschner, Matthias
    Automatic processing of three-dimensional image data acquired with computed tomography or magnetic resonance imaging plays an increasingly important role in medicine. For example, the automatic segmentation of anatomical structures in tomographic images makes it possible to generate three-dimensional visualizations of a patient's anatomy and thereby supports surgeons during the planning of various kinds of surgeries. Because organs in medical images often exhibit a low contrast to adjacent structures, and because the image quality may be hampered by noise or other image acquisition artifacts, the development of segmentation algorithms that are both robust and accurate is very challenging. In order to increase the robustness, the use of model-based algorithms is mandatory, for example algorithms that incorporate prior knowledge about an organ's shape into the segmentation process. Recent research has proven that Statistical Shape Models are especially appropriate for robust medical image segmentation. In these models, the typical shape of an organ is learned from a set of training examples. However, Statistical Shape Models have two major disadvantages: the construction of the models is relatively difficult, and the models are often used too restrictively, such that the resulting segmentation does not delineate the organ exactly. This thesis addresses both problems: the first part of the thesis introduces new methods for establishing correspondence between training shapes, which is a necessary prerequisite for shape model learning. The developed methods include consistent parameterization algorithms for organs with spherical and genus-1 topology, as well as a nonrigid mesh registration algorithm for shapes with arbitrary topology. The second part of the thesis presents a new shape model-based segmentation algorithm that allows for an accurate delineation of organs. In contrast to existing approaches, it is possible to integrate not only linear shape models into the algorithm, but also nonlinear shape models, which allow for a more specific description of an organ's shape variation. The proposed segmentation algorithm is evaluated in three applications to medical image data: liver and vertebra segmentation in contrast-enhanced computed tomography scans, and prostate segmentation in magnetic resonance images.
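    For orientation, a point-based Statistical Shape Model of the linear kind mentioned above is usually obtained by PCA over corresponded training shapes (standard notation, not specific to the probabilistic model of this thesis):

```latex
x \;\approx\; \bar{x} + \Phi\, b,
\qquad \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i
```

    where x stacks the landmark coordinates of a shape, Φ holds the leading eigenvectors of the training covariance, and restricting the shape parameters b keeps segmentations within the learned shape space.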
  • Item
    Curve Analysis with Applications to Archaeology
    (Kolomenkin, 2013-09-15) Kolomenkin, Michael
    In this thesis we discuss methods for the definition, detection, analysis, and application of curves on surfaces. While doubtlessly as important as curves in images, curves on surfaces have gained less attention. A number of definitions of curves on surfaces have been proposed. The most famous among them are ridges and valleys. While portraying important object properties, ridges and valleys fail to capture the shape of some objects, for example of objects with reliefs. We propose a new type of curves, termed relief edges, which addresses the limitations of ridges and valleys, and demonstrate how to compute it effectively. We demonstrate that relief edges portray the shape of some objects more accurately than other curves. Moreover, we present a novel framework for the automatic estimation of the optimal scale for curve detection on surfaces. This framework enables the correct estimation of curves on surfaces of objects consisting of features of multiple scales. It is generic and can be applied to any type of curve. We define a novel vector field on surfaces, termed the prominent field, which is a smooth direction field perpendicular to the object's features. The prominent field is useful for surface enhancement and visualization. In addition, we address the problem of reconstructing a relief object from a line drawing. Our method is able to automatically reconstruct reliefs from complex drawings composed of hundreds of lines. Finally, we successfully apply our algorithms to archaeological objects. These objects provide a significant challenge from an algorithmic point of view, since after several thousand years underground they are seldom as smooth and nice as manually modeled objects.
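    For context, the ridges that relief edges generalize are commonly defined as extrema of the maximal principal curvature along its own principal direction (this is the standard textbook definition, not a quotation from the thesis):

```latex
\nabla_{t_1}\kappa_1 = 0,
\qquad \nabla_{t_1}\nabla_{t_1}\kappa_1 < 0,
\qquad \kappa_1 > |\kappa_2|
```

    where κ1, κ2 are the principal curvatures and t_1 the principal direction of κ1; valleys are defined analogously with minima of κ2.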
  • Item
    Reciprocal Shading for Mixed Reality
    (Knecht, 2013-12-19) Knecht, Martin
    Reciprocal shading for mixed reality aims to integrate virtual objects into real environments in a way that they are, in the ideal case, indistinguishable from real objects. It is therefore an attractive technology for architectural visualizations, product visualizations and for cultural heritage sites, where virtual objects should be seamlessly merged with real ones. Due to the improved performance of recent graphics hardware, real-time global illumination algorithms are feasible for mixed-reality applications, and thus more and more researchers address realistic rendering for mixed reality. The goal of this thesis is to provide algorithms which improve the visual plausibility of virtual objects in mixed-reality applications. Our contributions are as follows: First, we present five methods to reconstruct the real surrounding environment. In particular, we present two methods for geometry reconstruction, a method for material estimation at interactive frame rates, and two methods to reconstruct the color mapping characteristics of the video see-through camera. Second, we present two methods to improve the visual appearance of virtual objects. The first, called differential instant radiosity, combines differential rendering with a global illumination method called instant radiosity to simulate reciprocal shading effects such as shadowing and indirect illumination between real and virtual objects. The second method focuses on the visually plausible rendering of reflective and refractive objects. The high-frequency lighting effects caused by these objects are also simulated with our method. The third part of this thesis presents two user studies which evaluate the influence of the presented rendering methods on human perception. The first user study measured task performance with respect to the rendering mode, and the second user study was set up as a web survey where participants had to choose which of two presented images, showing mixed-reality scenes, they preferred.
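    As background, differential rendering, on which differential instant radiosity builds, composites the final pixel from the camera image and two renderings of the reconstructed scene (standard formulation, not quoted from the thesis):

```latex
L_{\mathrm{final}} \;=\; L_{\mathrm{cam}} \;+\; \bigl(L_{\mathrm{rv}} - L_{\mathrm{r}}\bigr)
```

    where L_cam is the video see-through camera image, L_rv the rendered solution with real and virtual objects, and L_r the rendered solution of the real scene only; the difference adds virtual objects and the shadows and indirect light they cast onto real geometry, while modeling errors of the real scene largely cancel out.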
  • Item
    Interactive Visual Analysis in Automotive Engineering Design
    (Konyha, 2013-02-01) Konyha, Zoltan
    Computational simulation has become instrumental in the design process in automotive engineering. Virtually all components and subsystems of automobiles can be simulated. The simulation can be repeated many times with varied parameter settings, thereby simulating many possible design choices. Each simulation run can produce a complex, multivariate, and usually time-dependent result data set. The engineers' goal is to generate useful knowledge from those data. They need to understand the system's behavior, find correlations in the results, conclude how results depend on the parameters, find optimal parameter combinations, and exclude the ones that lead to undesired results. Computational analysis methods are widely used and necessary to analyze simulation data sets, but they are not always sufficient. They typically require that problems and interesting data features can be precisely defined from the beginning. The results of automated analysis of complex problems may be difficult to interpret. Exploring trends, patterns, relations, and dependencies in time-dependent data through statistical aggregates is not always intuitive. In this thesis, we propose techniques and methods for the interactive visual analysis (IVA) of simulation data sets. Compared to computational methods, IVA offers new and different analysis opportunities. Visual analysis utilizes human cognition and creativity, and can also incorporate the experts' domain knowledge. Therefore, their insight into the data can be amplified, and also less precisely defined problems can be solved. We introduce a data model that effectively represents the multi-run, time-dependent simulation results as families of function graphs. This concept is central to the thesis, and many of the innovations in this thesis are closely related to it. We present visualization techniques for families of function graphs. Those visualizations, as well as well-known information visualization plots, are integrated into a coordinated multiple views framework. All views provide focus+context visualization. Compositions of brushes spanning several views can be defined iteratively to select interesting features and promote information drill-down. Valuable insight into the spatial aspect of the data can be gained from (generally domain-specific) spatio-temporal visualizations. In this thesis, we propose interactive, glyph-based 3D visualization techniques for the analysis of rigid and elastic multibody system simulations. We integrate the on-demand computation of derived data attributes of families of function graphs into the analysis workflow. This facilitates the selection of deeply hidden data features that cannot be specified by combinations of simple brushes on the original data attributes. The combination of these building blocks supports interactive knowledge discovery. The analyst can build a mental model of the system; explore unexpected features and relations; and generate, verify or reject hypotheses with visual tools; thereby gaining more insight into the data. Complex tasks, such as parameter sensitivity analysis and optimization, can be solved. Although the primary motivation for our work was the analysis of simulation data sets in automotive engineering, we learned that this data model and the analysis procedures we identified are also applicable to several other problem domains.
    We discuss common tasks in the analysis of data containing families of function graphs. Two case studies demonstrate that the proposed approach is indeed applicable to the analysis of simulation data sets in automotive engineering. Some of the contributions of this thesis have been integrated into a commercially distributed software suite for engineers. This suggests that their impact can extend beyond the visualization research community.
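    As a minimal sketch of the family-of-function-graphs data model and brushing (not the thesis's implementation; names are illustrative), each simulation run can be stored as one row of a matrix and a brush selects the runs whose curves stay inside a value range over a time interval:

```python
import numpy as np

def brush_function_graphs(curves, times, t_range, y_range):
    """Select members of a family of function graphs.

    curves  : (runs, samples) array, one time series per simulation run
    times   : (samples,) array of time stamps
    t_range : (t0, t1) interval covered by the brush
    y_range : (y0, y1) value range the curve must stay within on t_range
    Returns a boolean mask over the runs; masks from several linked views
    can be combined with logical and/or to drill down.
    """
    t0, t1 = t_range
    y0, y1 = y_range
    in_t = (times >= t0) & (times <= t1)
    window = curves[:, in_t]
    return np.all((window >= y0) & (window <= y1), axis=1)
```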
  • Item
    Bayesian and Quasi Monte Carlo Spherical Integration for Global Illumination
    (Marques, 2013-10-22) Marques, Ricardo
    The spherical sampling of the incident radiance function entails a high computational cost. Therefore the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited set of samples. In this thesis, we show that existing Monte Carlo-based approaches can be improved by fully exploiting the available information, which is then used for careful sample placement and weighting. The first contribution of this thesis is a strategy for producing high-quality Quasi-Monte Carlo (QMC) sampling patterns for spherical integration by resorting to spherical Fibonacci point sets. We show that these patterns, when applied to the rendering integral, are very simple to generate and consistently outperform existing approaches. Furthermore, we introduce theoretical aspects of QMC spherical integration that, to our knowledge, have never been used in the graphics community, such as spherical cap discrepancy and point set spherical energy. These metrics allow assessing the quality of a spherical point set for a QMC estimate of a spherical integral. In the next part of the thesis, we propose a new theoretical framework for computing the Bayesian Monte Carlo (BMC) quadrature rule. Our contribution includes a novel method of quadrature computation based on spherical Gaussian functions that can be generalized to a broad class of BRDFs (any BRDF which can be approximated by a sum of one or more spherical Gaussian functions) and potentially to other rendering applications. We account for the BRDF sharpness by using a new computation method for the prior mean function. Lastly, we propose a fast hyperparameter evaluation method that avoids the learning step. Our last contribution is the application of BMC with an adaptive approach for evaluating the illumination integral. The idea is to compute a first BMC estimate (using a first sample set) and, if the quality criterion is not met, directly inject the result as prior knowledge into a new estimate (using another sample set). The new estimate refines the previous estimate using a new set of samples, and the process is repeated until a satisfying result is achieved.
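    A spherical Fibonacci point set of the kind used for QMC spherical integration can be generated as follows (one common construction with golden-angle longitudes and equal-area latitudes; the thesis may use a slightly different mapping):

```python
import numpy as np

def spherical_fibonacci(n):
    """Return n quasi-uniform unit vectors on the sphere built from the
    Fibonacci lattice (golden-angle longitude, equal-area stratified z)."""
    golden_ratio = (1.0 + np.sqrt(5.0)) / 2.0
    i = np.arange(n)
    phi = 2.0 * np.pi * i / golden_ratio        # longitude
    z = 1.0 - (2.0 * i + 1.0) / n               # equal-area stratification in z
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# A QMC estimate of a (hemi)spherical integral is then the average of the
# integrand over these directions, scaled by the domain's solid angle.
```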
  • Item
    Shapes in Vector Fields
    (Martinez Esturo, 2013-10-25) Martinez Esturo, Janick
    Geometric shapes are the basic building blocks of any graphics-related application. The effective manipulation of shapes is therefore of central interest for many relevant problems. In particular, there is a growing demand for high-quality nonlinear deformations for shape modeling and animation. The application of vector fields that guide a continuous deformation is a practical approach for their computation. It turns out that typically challenging nonlinear problems can be solved in an elegant way using such vector field-based methodologies. This thesis presents novel approaches and prospects for the vector field-based manipulation of geometric shapes (Part I). Thereafter, the definition of geometric shapes by means of vector fields is also examined (Part II). Depending on the specific shape representation and the concrete modeling problem, different types of vector fields are required: a family of generalized vector field energies is introduced that enables near-isometric, near-conformal, as well as near-authalic continuous deformations of planar and volumetric shapes. It is demonstrated how near-isometric surface and volume-preserving isosurface deformations are computed by a similar framework. Furthermore, an integration-based pose correction method is presented. Based on a generic energy description that incorporates energy smoothness, a conceptually simple but effective generalized energy regularization is proposed, which is not only beneficial for continuous deformations but additionally enhances a variety of related geometry processing methods. In the second part of the thesis vector fields are no longer considered to represent deformations. Instead, they are interpreted as flow fields that define characteristic shapes such as stream surfaces: a deformation-based approach for interactive flow exploration and the extraction of flow-tangential and flow-orthogonal surfaces is proposed. It is shown how a unified computational framework yields parametrizations that are particularly useful for surface-based flow illustrations. Finally, an automatic method for the selection of relevant stream surfaces in complex flow data sets is presented that is based on a new surface-based intrinsic quality measure. The usefulness of the newly developed methods is shown by applying them to a number of different geometry processing and visualization problems.
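    A continuous, vector-field-guided deformation is realized by numerically integrating every vertex along the guiding field; the sketch below uses a classical RK4 step and assumes a user-supplied field (the thesis's near-isometric energies that produce such fields are not reproduced here).

```python
import numpy as np

def integrate_vertices(vertices, velocity, t0, t1, steps):
    """Advect mesh vertices through a time-dependent deformation field.

    vertices : (n, d) array of vertex positions (d = 2 or 3)
    velocity : callable v(x, t) -> (n, d) array, the guiding vector field
    """
    x = vertices.astype(float).copy()
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):                     # one RK4 step per iteration
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(x + h * k3, t + h)
        x += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return x
```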
  • Item
    Computational Imaging: Combining Optics, Computation and Perception
    (Masia, 2013-12-11) Masia, Belen
    This thesis presents contributions on the different stages of the imaging pipeline, from capture to display, and including interaction as well; we embrace all of them under the concept of Computational Imaging. The addressed topics are diverse, but the driving force and common thread has been the conviction that a combination of improved optics and hardware (optics), computation and signal processing (computation), and insights from how the human visual system works (perception) are needed for and will lead to significant advances in the imaging pipeline. In particular, we present contributions in the areas of: coded apertures for defocus deblurring, reverse tone mapping, disparity remapping for automultiscopic and stereoscopic displays, visual comfort when viewing stereo content, interaction paradigms for light field editing, and femto-photography and transient imaging.
  • Item
    Smart Interactive Vessel Visualization in Radiology
    (Mistelbauer, 2013-11-25) Mistelbauer, Gabriel
    Cardiovascular diseases occur with increasing frequency in our society. Their diagnosis often requires tailored visualization techniques, e.g., to examine the blood flow channel in case of luminal narrowing. Curved Planar Reformation (CPR) addresses this field by creating longitudinal sections along the centerline of blood vessels. With the possibility to rotate around an axis, the entire vessel can be assessed for possible vascular abnormalities (e.g., calcifications on the vessel wall, stenoses, and occlusions). In this thesis, we present a visualization technique, called Centerline Reformation (CR), that offers the possibility to investigate the interior of any blood vessel, regardless of its spatial orientation. Starting from the projected vessel centerlines, the lumen of any vessel is generated by employing wavefront propagation in image space. The vessel lumen can be optionally delineated by halos, to enhance spatial relationships when examining a dense vasculature. We present our method in a focus+context setup, by rendering a different kind of visualization around the lumen. We explain how to resolve correct visibility of multiple overlapping vessels in image space. Additionally, our visualization method allows the examination of a complex vasculature by means of interactive vessel filtering and subsequent visual querying. We propose an improved version of the Centerline Reformation (CR) technique, by generating a completely three-dimensional reformation of vascular structures using ray casting. We call this process Curved Surface Reformation (CSR). In this method, the cut surface is smoothly extended into the surrounding tissue of the blood vessels. Moreover, automatically generated cutaways reveal as much of the vessel lumen as possible, while still retaining correct visibility. This technique offers unrestricted navigation within the inspected vasculature and allows diagnosis of any tubular structure, regardless of its spatial orientation. The growing amount of data requires increasing knowledge from a user in order to select the appropriate visualization method for their analysis. In this thesis, we present an approach that externalizes the knowledge of domain experts in a human readable form and employs an inference system to provide only suitable visualization techniques for clinical diagnosis, namely Smart Super Views. We discuss the visual representation of such automatically suggested visualizations by encoding the respective relevance into shape and size of their view. By providing a smart spatial arrangement and integration, the image becomes the menu itself. Such a system offers a guided medical diagnosis by domain experts. After presenting the approach in a general setting, we describe an application scenario for diagnostic vascular visualization techniques. Since vascular structures usually consist of many vessels, we describe an anatomical layout for the investigation of the peripheral vasculature of the human lower extremities. By aggregating the volumetric information around the vessel centerlines in a circular fashion, we provide only a single static image for the assessment of the vessels. We call this method Curvicircular Feature Aggregation (CFA). In addition, we describe a stability analysis on the local deviations of the centerlines of vessels to determine potentially imprecise definitions. By conveying this information in the visualization, a fast visual analysis of the centerline stability is feasible.
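    As an illustrative baseline only (a basic straightened CPR resampling, not the CR/CSR methods contributed by the thesis; helper names are assumptions), the volume is sampled along the centerline and along a cross-sectional direction that can be rotated to inspect the whole vessel:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_cpr(volume, centerline, directions, half_width, spacing=1.0):
    """Build a straightened CPR image.

    volume     : 3D intensity array (indexed in voxel coordinates)
    centerline : (n, 3) points along the vessel centerline (voxel coords)
    directions : (n, 3) unit vectors spanning the longitudinal cut;
                 rotating them around the centerline gives the rotatable view
    half_width : number of samples taken to each side of the centerline
    """
    offsets = np.arange(-half_width, half_width + 1) * spacing
    rows = []
    for c, d in zip(centerline, directions):
        samples = c[None, :] + offsets[:, None] * d[None, :]   # (w, 3)
        rows.append(map_coordinates(volume, samples.T, order=1))
    return np.stack(rows)            # (n, 2*half_width + 1) CPR image
```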
  • Item
    Surface Appearance Estimation from Video Sequences
    (Palma, 2013-06-07) Palma, Gianpaolo
    The realistic virtual reproduction of real world objects using Computer Graphics techniques requires the accurate acquisition and reconstruction of both 3D geometry and surface appearance. The ability to play interactively with the reflectance, changing the view and the light(s) direction, is mandatory in most applications. In many cases, image synthesis should be based on real, sampled data: synthetic images should comply with sampled images of the real artwork. Unfortunately, in several application contexts, such as Cultural Heritage (CH), the reflectance acquisition can be very challenging due to the type of object to acquire and the digitization conditions. Although several methods have been proposed for the acquisition of object reflectance, some intrinsic limitations still make reflectance acquisition a complex task for CH artworks: the use of specialized instruments (dome, special setup for camera and light source, etc.) that require moving the artwork from its usual location; the need for highly controlled acquisition environments, such as a dark room, which are difficult to reproduce in standard environments (such as museums, historical buildings, outdoor locations, etc.); the difficulty of extending to objects of arbitrary shape and size; the high level of expertise required to assess the quality of the acquired surface appearance. This thesis proposes novel solutions for the acquisition and the estimation of the surface appearance in fixed and uncontrolled lighting conditions with several degrees of approximation (from a perceived near-diffuse color to a SVBRDF), taking advantage of the main features that differentiate a video sequence from an unordered photo collection: the temporal coherence; the data redundancy; the ease of acquisition, which allows acquiring many views of the object in a short time. Finally, Reflectance Transformation Imaging (RTI) is an example of a widely used technology for the acquisition of the surface appearance in the CH field, even if limited to single-view Reflectance Fields of nearly flat objects. In this context, the thesis also addresses two important issues in RTI usage: how to provide better and more flexible virtual inspection capabilities with a set of operators that improve the perception of details, features and overall shape of the artwork; and how to increase the possibility to disseminate this data and to support remote visual inspection by both scholars and the general public.
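    As background only (a common per-pixel RTI representation, the Polynomial Texture Map, fitted by least squares; the thesis targets richer appearance estimates, so this is just an illustrative baseline with hypothetical names):

```python
import numpy as np

def fit_ptm(intensities, light_dirs):
    """Fit per-pixel Polynomial Texture Map coefficients.

    intensities : (k, n) array, per-pixel intensity under k light directions
    light_dirs  : (k, 3) unit light directions (lu, lv, lw)
    Returns (n, 6) biquadratic coefficients a0..a5 per pixel such that
    I(lu, lv) ~ a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu * lu, lv * lv, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs.T
```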
  • Item
    Reconstruction of 3D Models from Images and Point Clouds with Shape Primitives
    (Reisner-Kollmann, 2013-03-12) Reisner-Kollmann, Irene
    3D models are widely used in different applications, including computer games, planning software, applications for training and simulation, and virtual city maps. For many of these applications it is necessary, or at least advantageous, if the virtual 3D models are based on real-world scenes and objects. Manual modeling is reserved for experts as it requires extensive skills. For this reason, it is necessary to provide automatic or semi-automatic, easy-to-use techniques for reconstructing 3D objects. In this thesis we present methods for reconstructing 3D models of man-made scenes. These scenes can often be approximated with a set of geometric primitives, like planes or cylinders. Using geometric primitives leads to light-weight, low-poly 3D models, which are beneficial for efficient storage and post-processing. The applicability of reconstruction algorithms highly depends on the existing input data, the characteristics of the captured objects, and the desired properties of the reconstructed 3D model. For this reason, we present three algorithms that use different input data. It is possible to reconstruct 3D models from just a few photographs or to use a dense point cloud as input. Furthermore, we present techniques to combine information from both images and point clouds. The image-based reconstruction method is especially designed for environments with homogeneous and reflective surfaces where it is difficult to acquire reliable point sets. Therefore we use an interactive application which requires user input. Shape primitives are fitted to user-defined segmentations in two or more images. Our point-based algorithms, on the other hand, provide fully automatic reconstructions. Nevertheless, the automatic computations can be enhanced by manual user input to generate improved results. The first point-based algorithm is specialized in reconstructing 3D models of buildings and uses unstructured point clouds as input. The point cloud is segmented into planar regions and converted into 3D geometry. The second point-based algorithm additionally supports the reconstruction of interior scenes. While unstructured point clouds are supported as well, this algorithm specifically exploits the redundancy and visibility information provided by a set of range images. The data is automatically segmented into geometric primitives. Then the shape boundaries are extracted either automatically or interactively.
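    A minimal sketch of shape-primitive detection of the kind such point-based pipelines rely on, here a RANSAC plane fit (thresholds and names are illustrative, not the thesis's segmentation algorithm):

```python
import numpy as np

def ransac_plane(points, n_iters=500, inlier_dist=0.01, seed=None):
    """Detect the dominant plane in a point cloud with RANSAC.

    points : (n, 3) array.
    Returns (unit normal, offset d with normal . x = d, boolean inlier mask).
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate sample, skip
            continue
        normal /= norm
        d = normal @ p0
        mask = np.abs(points @ normal - d) < inlier_dist
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask
```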
  • Item
    Data Driven Analysis of Faces from Images
    (Scherbaum, Kristina, 2013-09-17) Scherbaum, Kristina
    This thesis proposes three new data-driven approaches to detect, analyze, or modify faces in images. All presented contributions are inspired by the use of prior knowledge and they derive information about facial appearances from pre-collected databases of images or 3D face models. First, we contribute an approach that extends a widely-used monocular face detector by an additional classifier that evaluates disparity maps of a passive stereo camera. The algorithm runs in real-time and significantly reduces the number of false positives compared to the monocular approach. Next, with a many-core implementation of the detector, we train view-dependent face detectors based on tailored views which guarantee that the statistical variability is fully covered. These detectors are superior to the state of the art on a challenging dataset and can be trained in an automated procedure. Finally, we contribute a model describing the relation of facial appearance and makeup. The approach extracts makeup from before/after images of faces and allows to modify faces in images. Applications such as machine-suggested makeup can improve perceived attractiveness as shown in a perceptual study. In summary, the presented methods help improve the outcome of face detection algorithms, ease and automate their training procedures and the modification of faces in images. Moreover, their data-driven nature enables new and powerful applications arising from the use of prior knowledge and statistical analyses.
  • Item
    Information Retrieval for Multivariate Research Data Repositories
    (Scherer, 2013-12-02) Scherer, Maximilian
    In this dissertation, I tackle the challenge of information retrieval for multivariate research data by providing novel means of content-based access. Large amounts of multivariate data are produced and collected in different areas of scientific research and industrial applications, including the human or natural sciences, the social or economical sciences, and applications like quality control, security and machine monitoring. Archival and re-use of this kind of data has been identified as an important factor in the supply of information to support research and industrial production. Due to increasing efforts in the digital library community, such multivariate data are collected, archived and often made publicly available by specialized research data repositories. A multivariate research data document consists of tabular data with m columns (measurement parameters, e.g., temperature, pressure, humidity, etc.) and n rows (observations). To render such data-sets accessible, they are annotated with meta-data according to a well-defined meta-data standard when being archived. These annotations include time, location, parameters, title, author (and potentially many more) of the document under concern. In particular for multivariate data, each column is annotated with the parameter name and unit of its data (e.g., water depth [m]). The task of retrieving and ranking the documents an information seeker is looking for is an important and difficult challenge. To date, access to this data is primarily provided by means of annotated, textual meta-data as described above. An information seeker can search for documents of interest by querying for the annotated meta-data. For example, an information seeker can retrieve all documents that were obtained in a specific region or within a certain period of time. Similarly, she can search for data-sets that contain a particular measurement via its parameter name, or search for data-sets that were produced by a specific scientist. However, retrieval via textual annotations is limited and does not allow for content-based search, e.g., retrieving data which contains a particular measurement pattern like a linear relationship between water depth and water pressure, or which is similar to example data the information seeker provides. In this thesis, I deal with this challenge and develop novel indexing and retrieval schemes to extend the established, meta-data based access to multivariate research data. By analyzing and indexing the data patterns occurring in multivariate data, one can support new techniques for content-based retrieval and exploration, well beyond meta-data based query methods. This allows information seekers to query for multivariate data-sets that exhibit patterns similar to an example data-set they provide. Furthermore, information seekers can specify one or more particular patterns they are looking for, to retrieve multivariate data-sets that contain similar patterns. To this end, I also develop visual-interactive techniques to support information seekers in formulating such queries, which inherently are more complex than textual search strings. These techniques include providing an overview of potentially interesting patterns to search for, which interactively adapts to the user's query as it is being entered. Furthermore, based on the pattern description of each multivariate data document, I introduce a similarity measure for multivariate data.
This allows scientists to quickly discover similar (or contradictory) data to their own measurements.
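    As an illustrative baseline only (the thesis's pattern descriptors and similarity measure are more elaborate; names below are mine), one simple content-based similarity compares the pairwise-correlation structure of two tables with the same parameters:

```python
import numpy as np

def correlation_signature(table):
    """Describe a multivariate data-set (rows = observations, columns =
    measured parameters) by its column correlation matrix."""
    return np.corrcoef(table, rowvar=False)

def pattern_distance(table_a, table_b):
    """Distance between two data-sets measuring the same parameters:
    Frobenius norm of the difference of their correlation signatures."""
    ca = correlation_signature(table_a)
    cb = correlation_signature(table_b)
    return np.linalg.norm(ca - cb)
```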
  • Item
    Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing
    (Schwenk, 2013-07-05) Schwenk, Karsten
    Progressive stochastic ray tracing is increasingly used in interactive applications. Examples of such applications are interactive design reviews and digital content creation. This dissertation aims at advancing this development. First, two filtering techniques are presented that can generate fast and reliable previews of global illumination solutions. Second, a system architecture is presented that supports exchangeable rendering back-ends in distributed rendering systems.
  • Item
    Processing Semantically Enriched Content for Interactive 3D Visualizations
    (Settgast, 2013-05-28) Settgast, Volker
    Interactive 3D graphics has become an essential tool in many fields of application: In manufacturing companies, e.g., new products are planned and tested digitally. The effect of new designs and the testing of ergonomic aspects can be evaluated with purely virtual models. Furthermore, the training of procedures on complex machines is shifted to the virtual world. In that way support costs for the usage of the real machine are reduced, and effective forms of training evaluation are possible. Virtual reality helps to preserve and study cultural heritage: Artifacts can be digitalized and preserved in a digital library, making them accessible to a larger group of people. Various forms of analysis can be performed on the digital objects which are hardly possible to perform on the real objects or would destroy them. Using virtual reality environments like large projection walls helps to show virtual scenes in a realistic way. The level of immersion can be further increased by using stereoscopic displays and by adjusting the images to the head position of the observer. One challenge with virtual reality is the inconsistency in data. Moving 3D content from a useful state, e.g., from a repository of artifacts or from within a planning work flow, to an interactive presentation is often realized with degenerative steps of preparation. The productiveness of Powerwalls and CAVEs™ is called into question, because the creation of interactive virtual worlds is a one-way road in many cases: Data has to be reduced in order to be manageable by the interactive renderer and to be displayed in real time on various target platforms. The impact of virtual reality can be improved by bringing back results from the virtual environment to a useful state or, even better, by never leaving that state. With the help of semantic data throughout the whole process, it is possible to speed up the preparation steps and to keep important information within the virtual 3D scene. The integrated support for semantic data enhances the virtual experience and opens new ways of presentation. At the same time the goal becomes feasible to bring data from the presentation, for example in a CAVE™, back to the working process. Especially in the field of cultural heritage it is essential to store semantic data with the 3D artifacts in a sustainable way. Within this thesis new ways of handling semantic data in interactive 3D visualizations are presented. The whole process of 3D data creation is demonstrated with regard to semantic sustainability. The basic terms, definitions and available standards for semantic markup are described. Additionally, a method is given to generate semantics of higher order automatically. An important aspect is the linking of semantic information with 3D data. The thesis gives two suggestions on how to store and publish the valuable combination of 3D content and semantic markup in a sustainable way. Different environments for virtual reality are compared and their special needs are pointed out. Primarily the DAVE in Graz is presented in detail, and novel ways of user interaction in such immersive environments are proposed. Finally, applications in the fields of cultural heritage, security and mobility are presented. The presented symbiosis of 3D content and semantic information is an important contribution for improving the usage of virtual environments in various fields of application.
  • Item
    Cache based Optimization of Stencil Computations - An Algorithmic Approach
    (Shaheen, Mohammed, 2013-11-05) Shaheen, Mohammed
    We are witnessing a fundamental paradigm shift in computer design. Memory has been and is becoming more hierarchical. Clock frequency is no longer crucial for performance. The on-chip core count is doubling rapidly. The quest for performance is growing. These facts have led to complex computer systems which place high demands on scientific computing problems to achieve high performance. Stencil computation is a frequent and important kernel that is affected by this complexity. Its importance stems from the wide variety of scientific and engineering applications that use it. The stencil kernel is a nearest-neighbor computation with low arithmetic intensity, thus it usually achieves only a tiny fraction of the peak performance when executed on modern computer systems. Fast on-chip memory modules were introduced as the hardware approach to alleviate the problem. There are mainly three approaches to address the problem: cache-aware, cache-oblivious, and automatic loop transformation approaches. In this thesis, comprehensive cache-aware and cache-oblivious algorithms to optimize stencil computations on structured rectangular 2D and 3D grids are presented. Our algorithms observe the challenges for high performance in the previous approaches, devise solutions for them, and carefully balance the solution building blocks against each other. Many-core systems put the scalability of memory access at stake, which has led to hierarchical main memory systems. This adds another locality challenge for performance. We tailor our frameworks to meet the new performance challenge on these architectures. Experiments are performed to evaluate the performance of our frameworks on synthetic as well as real-world problems.
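    For illustration, the kernel in question is a nearest-neighbor sweep such as the 2D 5-point Jacobi stencil below, and a simple spatially blocked (cache-aware, tiled) traversal of the same sweep; the thesis's algorithms add temporal blocking and cache-oblivious recursion, which this sketch does not reproduce.

```python
import numpy as np

def jacobi_step(u):
    """One 5-point Jacobi sweep over the interior of a 2D grid."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def jacobi_step_tiled(u, tile=64):
    """Same sweep, traversed tile by tile so each block of the grid is
    reused from cache while it is hot (spatial blocking only)."""
    v = u.copy()
    n, m = u.shape
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, m - 1, tile):
            i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
            v[i0:i1, j0:j1] = 0.25 * (u[i0-1:i1-1, j0:j1] + u[i0+1:i1+1, j0:j1] +
                                      u[i0:i1, j0-1:j1-1] + u[i0:i1, j0+1:j1+1])
    return v
```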
  • Item
    Statistical Part-based Models for Object Detection in Large 3D Scans
    (Sunkel, 2013-09-17) Sunkel, Martin
    3D scanning technology has matured to a point where very large scale acquisition of high-resolution geometry has become feasible. However, having large quantities of 3D data poses new technical challenges. Many applications of practical use require an understanding of the semantics of the acquired geometry. Consequently, scene understanding plays a key role in many applications. This thesis is concerned with two core topics: 3D object detection and semantic alignment. We address the problem of efficiently detecting large quantities of objects in 3D scans according to object categories learned from sparse user annotation. Objects are modeled by a collection of smaller sub-parts and a graph structure representing part dependencies. The thesis introduces two novel approaches: a part-based chain-structured Markov model and a general part-based full correlation model. Both models come with efficient detection schemes which allow for interactive run-times.
  • Item
    Interactive Multiresolution and Multiscale Visualization of Large Volume Data
    (Suter, 2013-03-01) Suter, Susanne.
    Interactive visualization and analysis of large and complex volume data is an ongoing challenge. Data acquisition tools produce hundreds of Gigabytes of data and are one step ahead of visualization and analysis tools. Therefore, the amount of data to be rendered is typically beyond the limits of current computer and graphics hardware performance. We tackle this challenge in the context of state-of-the-art out-of-core multiresolution volume rendering systems by using a common mathematical framework (a) to extract relevant features from these large datasets, (b) to reduce and compress the actual amount of data, and (c) to directly render/visualize the data from the framework coefficients. This thesis includes an extended state-of-the-art analysis of data approximation approaches and how they can be applied to interactive volume visualization and used for feature extraction. Data is often approximated or reduced by using compact data representations, which require fewer coefficients than the original dataset. In this thesis, the higher-order extension of the matrix singular value decomposition, summarized under the term tensor approximation (TA), was chosen as the compact data representation. Tensor approximation consists of two parts: (1) tensor decomposition, usually an offline process, to compute the bases and coefficients, and (2) tensor reconstruction, typically a fast real-time process that inverts the decomposition back to the original data during visualization. From these basic concepts, we derive how multiresolution volume visualization and multiscale feature extraction are linked to the tensor approximation framework. The two axes of the TA bases were chosen as handles for multiresolution and multiscale visualization. The properties along the vertical axis of the TA bases match well the needs of state-of-the-art out-of-core multiresolution volume visualization, where different levels of detail are represented by coarser or higher resolution representations of the same dataset and portions of the original dataset are loaded on demand in the desired resolution. Thus, the vertical axis of the TA bases is used for spatial selectivity and subsampling of data blocks. The horizontal axis of the TA bases makes it possible to reconstruct the dataset at multiple feature scales through the so-called tensor rank. Choosing only a few ranks corresponds to a low-rank approximation (many details removed) and choosing many ranks corresponds to an approximation more closely matching the original. Furthermore, a feature scale metric was developed to automatically select a feature scale and a resolution for the final reconstruction. In this scenario, the user selects a desired feature scale for the approximated data, which is then used by the visualization system to automatically define the resolution and the feature scale for the current view on the dataset. Thanks to the compact data representation by TA, a significant data compression (15 percent of the original data elements) was achieved, which keeps the storage costs low and boosts the interactive visualization. The interactive visualization is moreover accelerated by using GPU-based tensor reconstruction approaches. The viability of interactive multiscale and multiresolution volume visualization is tested with different TA volume visualization frameworks: (1) a simple bricked TA multiresolution, and (2) a TA multiresolution framework that uses global tensor bases. Both TA frameworks build on the vmmlib tensor classes, which were specifically developed for this thesis.
    For the testing, large volume datasets from micro-computed tomography (microCT) and phase-contrast synchrotron tomography (pcST) that range up to 34 Gigabytes were acquired. We show visual as well as computational comparisons to state-of-the-art approaches such as the wavelet transform. We conclude that the tensor approximation framework is a unified framework for interactive multiscale and multiresolution volume visualization systems, which directly controls data approximation in terms of feature scale and multiple levels of detail. To wrap up, we discuss the achieved results and outline possible future work directions.
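    As a small illustration only (vmmlib itself is a C++ library; this NumPy sketch just shows the reconstruction side of a Tucker-style tensor approximation), a volume is recovered from a core tensor and one factor matrix per axis, and truncating the ranks yields the coarser feature-scale reconstructions described above:

```python
import numpy as np

def tucker_reconstruct(core, U1, U2, U3):
    """Reconstruct a 3D volume from a Tucker model.

    core         : (r1, r2, r3) core tensor
    U1, U2, U3   : factor matrices of shape (I1, r1), (I2, r2), (I3, r3)
    """
    # Mode products of the core with each factor matrix.
    return np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)

# Rank-truncated reconstruction at feature scale r:
# tucker_reconstruct(core[:r, :r, :r], U1[:, :r], U2[:, :r], U3[:, :r])
```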