2010

Animation Reconstruction of Deformable Surfaces

Li, Hao

Human Visual System Models in Computer Graphics

Aydin, Tunc Ozan

Measurement-based modeling and fabrication of deformable materials for human faces

Bickel, Bernd

Study of parallel techniques applied to surface reconstruction from unorganized and unoriented point clouds

Buchart Izaguirre, Carlos Ignacio

NURBS-compatible subdivision surfaces

Cashman, Thomas J.

Real-time High Quality HDR Illumination and Tonemapped Rendering

Michael, Despina

Constraint-Based Surface Processing for Geometric Modeling and Architecture

Eigensatz, Michael

Exploiting Coherence in Lighting and Shading Computations

Herzog, Robert

Reconsidering Light Transport - Acquisition and Display of Real-World Reflectance and Geometry

Hullin, Matthias

Process-Based Design of Multimedia Annotation Systems

Hofmann, Cristian Erick

Visual Analytics of Large Weighted Directed Graphs and Two-Dimensional Time-Dependent Data

Landesberger von Antburg, Tatiana

Visibility Computations for Real-Time Rendering in General 3D Environments

Mattausch, Oliver

Processing of Façade Imagery

Musialski, Przemyslaw

Visual Exploration and Analysis of Perfusion Data

Oeltze, Steffen

A Robust Approach to Interactive Virtual Cutting: Geometry and Color

Pietroni, Nico

Digital Processing and Management Tools for 2D and 3D Shape Repositories

Saleem, Waqar

A Stochastic Parallel Method for Real Time Monocular SLAM Applied to Augmented Reality

Sanchez Tapia, Jairo Roberto

Hybrid Methods for Interactive Shape Manipulation

Weber

Filament-Based Smoke

Weißmann, Steffen



Recent Submissions

  • Item
    Animation Reconstruction of Deformable Surfaces
    (Li, 2010) Li, Hao
    Accurate and reliable 3D digitization of dynamic shapes is a critical component in the creation of compelling CG animations. Digitizing deformable surfaces has applications ranging from robotics, biomedicine, and education to interactive games and film production. Markerless 3D acquisition technologies, in the form of continuous high-resolution scan sequences, are becoming increasingly widespread and not only capture static shapes, but also entire performances. However, due to the lack of inter-frame correspondences, the potential gains offered by these systems (such as recovery of fine-scale dynamics) have yet to be tapped. The primary purpose of this dissertation is to investigate foundational algorithms and frameworks that reliably compute these correspondences in order to obtain a complete digital representation of deforming surfaces from acquired data. We further our explorations in an important subfield of computer graphics, the realistic animation of human faces, and develop a full system for real-time markerless facial tracking and expression transfer to arbitrary characters. To this end, we complement our framework with a new automatic rigging tool which offers an intuitive way of instrumenting captured facial animations. We begin our investigation by addressing the fundamental problem of non-rigid registration, which establishes correspondences between incomplete scans of deforming surfaces. A robust algorithm is presented that tightly couples correspondence estimation and surface deformation within a single global optimization. With this approach, we break the dependency between both computations and achieve warps with considerably higher global spatial consistency than existing methods. We further corroborate the decisive aspects of using a non-linear space-time adaptive deformation model that maximizes local rigidity and an optimization procedure that systematically reduces stiffness. While recent advances in acquisition technology have made high-quality real-time 3D capture possible, surface regions occluded from the sensors cannot be captured. In this respect, we propose two distinct avenues for dynamic shape reconstruction. Our first approach consists of a bi-resolution framework which employs a smooth template model as a geometric and topological prior. While large-scale motions are recovered using non-rigid registration, fine-scale details are synthesized using a linear mesh deformation algorithm. We show how a detail aggregation and filtering procedure allows the transfer of persistent geometric details to regions that are not visible to the scanner. The second framework considers temporally coherent shape completion as the primary target and skips the requirement of establishing a consistent parameterization through time. The main benefit is that the method does not require a template model and is not susceptible to error accumulation, because the correspondence estimations are localized within a time window. The second part of this dissertation focuses on the animation reconstruction of realistic human faces. We present a complete integrated system for live facial puppetry that enables compelling facial expression tracking with transfer to another person's face. Even with just a single rigid pose of the target face, convincing facial animations are achievable and easy to control by an actor. We accomplish real-time performance through dimensionality reduction and by carefully shifting the complexity of online computation toward offline pre-processing.
    To facilitate the manipulation of reconstructed facial animations, we introduce a method for generating facial blendshape rigs from a set of example poses of a CG character. The algorithm transfers controller semantics from a generic rig to the target blendshape model while solving for an optimal reproduction of the training poses. We show the advantages of phrasing the optimization in gradient space and demonstrate the performance of the system in the context of art-directable facial tracking. The performance of our methods is evaluated using two state-of-the-art real-time acquisition systems (based on structured light and multi-view photometric stereo).
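    The registration idea summarized above (couple correspondence estimation with deformation and systematically reduce stiffness) can be illustrated with a deliberately simplified alignment loop. The sketch below uses closest-point correspondences and a per-vertex offset field with a Laplacian smoothness term purely as stand-ins; the thesis's actual deformation model and optimization are different and richer.
    ```python
    # Toy sketch, not the thesis's algorithm: alternate correspondence estimation and
    # deformation, annealing a stiffness weight alpha. Deformation model: per-vertex
    # offsets D minimizing ||X + D - targets||^2 + alpha * ||L D||^2 (L = graph Laplacian).
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import identity, csr_matrix
    from scipy.sparse.linalg import spsolve

    def uniform_laplacian(edges, n):
        rows, cols, vals = [], [], []
        for i, j in edges:
            rows += [i, i, j, j]; cols += [i, j, j, i]; vals += [1.0, -1.0, 1.0, -1.0]
        return csr_matrix((vals, (rows, cols)), shape=(n, n))

    def register(template, edges, scan, stiffness_schedule=(100.0, 10.0, 1.0)):
        X = template.copy()                      # current deformed template, shape (n, 3)
        L = uniform_laplacian(edges, len(X))
        tree = cKDTree(scan)
        for alpha in stiffness_schedule:         # systematically reduce stiffness
            for _ in range(5):
                _, idx = tree.query(X)           # correspondence estimation: closest points
                targets = scan[idx]
                A = (identity(len(X)) + alpha * (L.T @ L)).tocsc()
                D = np.column_stack([spsolve(A, targets[:, c] - X[:, c]) for c in range(3)])
                X = X + D                        # apply the deformation
        return X
    ```
    The inner iterations re-estimate correspondences under the current stiffness before the weight is lowered, mirroring the stiff-to-soft schedule described in the abstract.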
  • Item
    Human Visual System Models in Computer Graphics
    (Tunc Ozan Aydin, 2011-10-11) Aydin, Tunc Ozan
    At the receiving end of visual data are humans; thus it is only natural to take into account various properties and limitations of the human visual system while designing new image and video processing methods. In this dissertation we build multiple models of human vision with different focuses and complexities, and demonstrate their use in computer graphics context. The human visual system models we present perform two fundamental tasks: predicting the visual significance, and the detection of visual features. We start by showing that a perception based importance measure for edge strength prediction results in qualitatively better outcomes compared to commonly used gradient magnitude measure in multiple computer graphics applications. Another more comprehensive model including mechanisms to simulate maladaptation is used to predict the visual significance of images shown on display devices under dynamically changing lighting conditions. The detection task is investigated in the context of image and video quality assessment. We present an extension to commonly used image quality metrics that enables HDR support while retaining backwards compatibility with LDR content. We also propose a new 'dynamic range independent' image quality assessment method that can compare HDR-LDR (and vice versa) reference-test image pairs, in addition to image pairs with the same dynamic range. Furthermore, the design and validation of a dynamic range independent video quality assessment method, that models various spatiotemporal aspects of human vision, is presented along with pointers to a wide range of application areas including comparison of rendering qualities, HDR compression and temporal tonemapping operator evaluation.
  • Item
    Measurement-based modeling and fabrication of deformable materials for human faces
    (Bickel, 2010) Bickel, Bernd
    This thesis investigates the combination of data-driven and physically based techniques for acquiring, modeling, and animating deformable materials, with a special focus on human faces. Furthermore, based on these techniques, we introduce a data-driven process for designing and fabricating materials with desired deformation behavior.
    Realistic simulation behavior, surface details, and appearance are still demanding tasks. Neither pure data-driven, pure procedural, nor pure physical methods are best suited for accurate synthesis of facial motion and details (both for appearance and geometry), due to the difficulties in model design, parameter estimation, and desired controllability for animators. Capturing of a small but representative amount of real data, and then synthesizing diverse on-demand examples with physically-based models and real data as input benefits from both sides: highly realistic model behavior due to real-world data and controllability due to physically-based models.
    To model the face and its behavior, hybrid physically-based and data-driven approaches are elaborated. We investigate surface-based representations as well as a solid representation based on FEM. To achieve realistic behavior, we propose to build light-weighted data capture devices to acquire real-world data to estimate model parameters and to employ concepts from data-driven modeling techniques and machine learning. The resulting models support simple acquisition systems, offer techniques to process and extract model parameters from real-world data, provide a compact representation of the facial geometry and its motion, and allow intuitive editing. We demonstrate applications such as capture of facial geometry and motion and real-time animation and transfer of facial details, and show that our soft tissue model can react to external forces and produce realistic deformations beyond facial expressions.
    Based on this model, we furthermore introduce a data-driven process for designing and fabricating materials with desired deformation behavior. The process starts with measuring deformation properties of base materials. Each material is represented as a non-linear stress-strain relationship in a finite element model. For material design and fabrication, we introduce an optimization process that finds the best combination of base materials that meets a user's criteria specified by example deformations. Our algorithm employs a number of strategies to prune poor solutions from the combinatorial search space. We finally demonstrate the complete process by designing and fabricating objects with complex heterogeneous materials using modern multimaterial 3D printers.
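    As a purely illustrative companion to the design-and-fabrication loop described above, the following toy sketch searches over stacks of measured base materials for the combination whose grossly simplified, one-dimensional combined response best matches an example deformation. The material curves, the springs-in-series model, and all names are invented for illustration and are not the models used in the thesis.
    ```python
    # Toy illustration of the measure-then-search idea: pick a stack of base materials
    # whose combined (1D, springs-in-series) response best matches a target curve.
    # Material data and the series-compliance model are made up for illustration.
    import itertools
    import numpy as np

    stresses = np.linspace(0.0, 1.0, 20)                 # probe stresses (arbitrary units)

    # "measured" strain(stress) curves of three hypothetical base materials
    base_materials = {
        "soft":   0.80 * stresses + 0.30 * stresses**2,
        "medium": 0.40 * stresses + 0.10 * stresses**2,
        "stiff":  0.10 * stresses + 0.02 * stresses**2,
    }

    def stack_strain(layers):
        """Equal-thickness layers under the same stress: strains average."""
        return np.mean([base_materials[m] for m in layers], axis=0)

    def best_stack(target_strain, n_layers=3):
        best, best_err = None, np.inf
        for combo in itertools.combinations_with_replacement(base_materials, n_layers):
            err = np.linalg.norm(stack_strain(combo) - target_strain)
            if err < best_err:
                best, best_err = combo, err
        return best, best_err

    target = 0.45 * stresses + 0.12 * stresses**2        # example deformation to reproduce
    print(best_stack(target))
    ```
    The brute-force enumeration stands in for the pruned combinatorial search mentioned in the abstract.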
  • Item
    Study of parallel techniques applied to surface reconstruction from unorganized and unoriented point clouds
    (Buchart Izaguirre, 2010-12-13) Buchart Izaguirre, Carlos Ignacio
    Nowadays, digital representations of real objects are becoming bigger as scanning processes become more accurate, so the time required for the reconstruction of the scanned models is also increasing. This thesis studies the application of parallel techniques to the surface reconstruction problem, in order to improve the processing time required to obtain the final mesh. It is shown how local interpolating triangulations are suitable for global reconstruction, while at the same time it is possible to take advantage of the independent nature of these triangulations to design highly efficient parallel methods. A parallel surface reconstruction method is presented, based on local Delaunay triangulations. The input points do not provide any additional information, such as normals, nor any known structure. This method has been designed to be GPU friendly, and two implementations are presented. To deal with the inherent problems of interpolating techniques (such as noise, outliers and non-uniform distribution of points), a consolidation process is studied and a parallel point-projection operator is presented, as well as its implementation on the GPU. This operator is combined with the local triangulation method to obtain a better reconstruction. This work also studies the possibility of using dynamic reconstruction techniques in a parallel fashion. The proposed method looks for a better interpretation and recovery of the shape and topology of the target model.
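    A minimal sketch of one plausible point-projection consolidation step is given below: each point is projected onto a plane fitted to its k nearest neighbours. Because every point is processed independently, such operators parallelize trivially (one GPU thread per point); the NumPy loop and the plane-fit projection here are only an assumed illustration, not the operator developed in the thesis.
    ```python
    # Hedged sketch of a per-point "projection" consolidation step: move each point
    # onto a plane fitted to its k nearest neighbours. Each point is independent,
    # which is the property that makes such operators easy to parallelise.
    import numpy as np
    from scipy.spatial import cKDTree

    def project_points(points, k=16):
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k)          # k nearest neighbours per point
        projected = np.empty_like(points)
        for i, idx in enumerate(nbrs):             # independent per point
            P = points[idx]
            c = P.mean(axis=0)
            # smallest principal direction = local plane normal
            _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
            n = Vt[-1]
            projected[i] = points[i] - np.dot(points[i] - c, n) * n
        return projected

    cloud = np.random.rand(1000, 3) + 0.01 * np.random.randn(1000, 3)  # noisy sample
    smoothed = project_points(cloud)
    ```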
  • Item
    NURBS-compatible subdivision surfaces
    (Cashman, Thomas J., Mar 2010) Cashman, Thomas J.
    Two main technologies are available to design and represent freeform surfaces: Non-Uniform Rational B-Splines (NURBS) and subdivision surfaces. Both representations are built on uniform B-splines, but they extend this foundation in incompatible ways, and different industries have therefore established a preference for one representation over the other. NURBS are the dominant standard for Computer-Aided Design, while subdivision surfaces are popular for applications in animation and entertainment. However there are benefits of subdivision surfaces (arbitrary topology) which would be useful within Computer-Aided Design, and features of NURBS (arbitrary degree and non-uniform parametrisations) which would make good additions to current subdivision surfaces. I present NURBS-compatible subdivision surfaces, which combine topological freedom with the ability to represent any existing NURBS surface exactly. Subdivision schemes that extend either non-uniform or general-degree B-spline surfaces have appeared before, but this dissertation presents the first surfaces able to handle both challenges simultaneously. To achieve this I develop a novel factorisation of knot insertion rules for non-uniform, general-degree B-splines. Many subdivision surfaces have poor second-order behaviour near singularities. I show that it is possible to bound the curvatures of the general-degree subdivision surfaces created using my factorisation. Bounded-curvature surfaces have previously been created by 'tuning' uniform low-degree subdivision schemes; this dissertation shows that general-degree schemes can be tuned in a similar way. As a result, I present the first general-degree subdivision schemes with bounded curvature at singularities. Previous subdivision schemes, both uniform and non-uniform, have inserted knots indiscriminately, but the factorised knot insertion algorithm I describe in this dissertation grants the flexibility to insert knots selectively. I exploit this flexibility to preserve convexity in highly non-uniform configurations, and to create locally uniform regions in place of non-uniform knot intervals. When coupled with bounded-curvature modifications, these techniques give the first non-uniform subdivision schemes with bounded curvature. I conclude by combining these results to present NURBS-compatible subdivision surfaces: arbitrary-topology, non-uniform and general-degree surfaces which guarantee high-quality second-order surface properties.
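    For reference, the classical (Boehm) knot insertion rule that such factorisations start from can be written as follows; the factorised, general-degree form developed in the dissertation is not reproduced here.
    ```latex
    % Inserting a knot \bar t \in [t_j, t_{j+1}) into a degree-d B-spline with
    % knot vector (t_i) and control points (P_i) yields new control points:
    \mathbf{Q}_i = (1-\alpha_i)\,\mathbf{P}_{i-1} + \alpha_i\,\mathbf{P}_i,
    \qquad
    \alpha_i =
    \begin{cases}
      1 & i \le j-d,\\[2pt]
      \dfrac{\bar t - t_i}{\,t_{i+d} - t_i\,} & j-d+1 \le i \le j,\\[2pt]
      0 & i \ge j+1,
    \end{cases}
    ```
    so a single insertion recomputes only d control points by convex combination.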
  • Item
    Real-time High Quality HDR Illumination and Tonemapped Rendering
    (Michael, 2010-07-10) Michael, Despina
    Real-time realistic rendering of a computer-generated scene is one of the core research areas in computer graphics, as it is required in applications such as computer games, training simulators, medical and architectural packages, and many other fields. The key factor of realism in the rendered images is the simulation of light transport under the given lighting conditions. More natural results are achieved using luminance values close to the physical ones. However, real-world luminances span a far greater range of values than can be displayed on standard monitors. As a final step of the rendering process, a tonemapping operator needs to be applied in order to transform the values in the rendered image into displayable ones. Illumination of a scene is usually approximated with the rendering equation, whose solution is a computationally expensive process. Moreover, the computational cost increases further with the number of light sources and the number of vertices of the objects in the scene. Furthermore, in order to achieve high frame rates, current illumination algorithms either compromise quality through simplifying assumptions or assume static scenes so that they can exploit precomputation. In this thesis we propose a real-time illumination algorithm for dynamic scenes which provides high-quality results and has only moderate memory requirements. The proposed algorithm is based on a factorization of a new notion that we introduce, full-sphere irradiance, which allows the contribution of all light sources to be pre-integrated into a single value valid for any possible receiver. Recent illumination algorithms, including ours, usually use environment maps to represent the incident lighting in the scene. Environment maps allow natural lighting conditions to be represented with high dynamic range (HDR) values. Typically, the HDR result of the illumination needs to be tonemapped into LDR values that can be displayed on standard monitors. Traditional tonemapping techniques emphasize either the frame rate (global operators) or the quality (local operators) of the resulting image. In this thesis, we propose a new framework, selective tonemapping, which addresses both requirements. The key idea of this framework is to apply the expensive computations of tonemapping only to the areas of the image which are regarded as important. A full rendering system has been developed which integrates the HDR illumination computation and the selective tonemapping framework. Results show high-quality images at real-time frame rates.
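    The rendering equation referred to above is, in its standard form, the following; estimating the integral for every receiver point is what makes naive evaluation with many lights and many vertices expensive.
    ```latex
    L_o(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o)
      \;+\; \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
            L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
    ```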
  • Item
    Constraint-Based Surface Processing for Geometric Modeling and Architecture
    (Eigensatz, Michael, 2011-03-14) Eigensatz, Michael
    This thesis investigates the application and implementation of geometric constraints to manipulate, approximate, and optimize surfaces for modeling and architecture. In modeling, geometric constraints provide an interface to edit and control the form of a surface. We present a geometry processing framework that enables constraints for positional, metric, and curvature properties anywhere on the surface of a geometric model. Target values for these properties can be specified point-wise or as integrated quantities over curves and surface patches embedded in the shape. For example, the user can draw several curves on the surface and specify desired target lengths, manipulate the normal curvature along these curves, or modify the area or principal curvature distribution of arbitrary surface patches. This user input is converted into a set of non-linear constraints. A global optimization finds the new deformed surface that best satisfies the constraints, while minimizing adaptable measures for metric and curvature distortion that provide explicit control on the deformation semantics. This approach enables flexible surface processing and shape editing operations. In architecture, the emergence of large-scale freeform shapes poses new challenges to the process from design to production. Geometric constraints directly arise from aesthetic, structural, and economic requirements for the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. Various constraints, such as the limited geometry of mold shapes and tolerances on positional and normal continuity between neighboring panels, have to be considered. We introduce a paneling algorithm that interleaves discrete and continuous optimization steps to minimize production cost while meeting the desired geometric constraints and is able to handle complex arrangements with thousands of panels. The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.
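    The flavour of "constraints plus distortion measures inside one optimization" can be conveyed with a toy example: deform a 2D polyline so that a prescribed total length is met, while a bending term and a closeness term act as distortion measures. Everything below (weights, the polyline, the choice of residuals) is a made-up illustration, not the framework of the thesis.
    ```python
    # Minimal sketch of constraint-based deformation as penalised least squares:
    # hit a prescribed total length while staying close and fair. Toy 2D example only.
    import numpy as np
    from scipy.optimize import least_squares

    orig = np.column_stack([np.linspace(0, 1, 20), np.zeros(20)])   # straight polyline
    target_length = 1.4                                              # user constraint

    def residuals(x):
        pts = x.reshape(-1, 2)
        seg = np.diff(pts, axis=0)
        length = np.sqrt((seg**2).sum(axis=1)).sum()
        fairness = (pts[:-2] - 2*pts[1:-1] + pts[2:]).ravel()        # bending / distortion term
        closeness = (pts - orig).ravel() * 0.1                       # stay near the input
        return np.concatenate([[10.0 * (length - target_length)], fairness, closeness])

    sol = least_squares(residuals, orig.ravel())
    deformed = sol.x.reshape(-1, 2)
    ```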
  • Item
    Exploiting Coherence in Lighting and Shading Computations
    (Herzog, Robert, 2010-07-01) Herzog, Robert
    Computing global illumination (GI) in virtual scenes is becoming increasingly attractive even for real-time applications nowadays. GI delivers important cues in the perception of 3D virtual scenes, which is important for material and architectural design. Therefore, for photo-realistic rendering in the design and even the game industry, GI has become indispensable. While the computer simulation of realistic global lighting is well-studied and often considered solved, computing it efficiently is not. Saving computation costs is therefore the main motivation of current research in GI. Efficient algorithms have to take various aspects into account, such as the algorithmic complexity and convergence, the mapping to parallel processing hardware, and knowledge of certain lighting properties including the capabilities of the human visual system. In this dissertation we exploit both low-level and high-level coherence in the practical design of GI algorithms for a variety of target applications ranging from high-quality production rendering to dynamic real-time rendering. We also focus on automatic rendering-accuracy control to approximate GI in such a way that the error is perceptually unified in the result images, thereby taking into account not only the limitations of the human visual system but also later video compression with an MPEG encoder. In addition, this dissertation provides many ideas and supplementary material which complement the published work and could be of practical relevance.
  • Item
    Reconsidering Light Transport - Acquisition and Display of Real-World Reflectance and Geometry
    (Matthias Hullin, 2010-12-15) Hullin, Matthias
    In this thesis, we cover three scenarios that violate common simplifying assumptions about the nature of light transport. We begin with the first ingredient to any 3D rendering: a geometry model. Most 3D scanners require the object of interest to show diffuse reflectance. The further a material deviates from the Lambertian model, the more likely these setups are to produce corrupted results. By placing a traditional laser scanning setup in a participating (in particular, fluorescent) medium, we have built a light sheet scanner that delivers robust results for a wide range of materials, including glass. Further investigating the phenomenon of fluorescence, we notice that, despite its ubiquity, it has received moderate attention in computer graphics. In particular, to date no data-driven reflectance models of fluorescent materials have been available. To describe the wavelength-shifting reflectance of fluorescent materials, we define the bispectral bidirectional reflectance and reradiation distribution function (BRRDF), for which we introduce an image-based measurement setup as well as an efficient acquisition scheme. Finally, we envision a computer display that shows materials instead of colours, and present a prototypical device that can exhibit anisotropic reflectance distributions similar to common models in computer graphics. Keywords: 3D scanning, gonioreflectometry, goniofluorometry, fluorescence, reflectance and reradiation, BRDF display.
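    One common way to write the bispectral quantity mentioned above is as a reflectance-and-reradiation kernel that couples incident and outgoing wavelengths; the notation below is a generic formulation, not necessarily the exact definition used in the thesis.
    ```latex
    L_o(\omega_o, \lambda_o) \;=\;
      \int_{\Lambda} \int_{\Omega}
         f_{\mathrm{brrdf}}(\omega_i, \lambda_i;\, \omega_o, \lambda_o)\,
         L_i(\omega_i, \lambda_i)\, \cos\theta_i \;\mathrm{d}\omega_i\, \mathrm{d}\lambda_i
    ```
    The ordinary, non-fluorescent case concentrates all reradiation at the incident wavelength, which reduces the kernel to a per-wavelength BRDF.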
  • Item
    Process-Based Design of Multimedia Annotation Systems
    (Hofmann, 2010-12-06) Hofmann, Cristian Erick
    Annotation of digital multimedia comprises a range of different application scenarios, supported media and annotation formats, and involved techniques. Accordingly, recent annotation environments provide numerous functions and editing options. This results in complexly designed user interfaces, so that human operators become disoriented with respect to task procedures and the selection of appropriate tools. In this thesis we contribute to the operability of multimedia annotation systems in several novel ways. We introduce concepts to support annotation processes, to which principles of Workflow Management are transferred. Particularly by focusing on the behavior of graphical user interface components, we achieve a significant decrease in user disorientation and processing times. In three initial studies, we investigate multimedia annotation from two different perspectives. A Feature-oriented Analysis of Annotation Systems describes applied techniques and forms of processed data. Moreover, a conducted Empirical Study and Literature Survey elucidate different practices of annotation, considering case examples and proposed workflow models. Based on the results of the preliminary studies, we establish a Generic Process Model of Multimedia Annotation, summarizing identified sub-processes and tasks, their sequential procedures, applied services, as well as involved data formats. By a transfer into a Formal Process Specification we define information entities and their interrelations, constituting a basis for workflow modeling and declaring the types of data which need to be managed and processed by the technical system. We propose a Reference Architecture Model, which elucidates the structure and behavior of a process-based annotation system, also specifying interactions and interfaces between different integrated components. As the central contribution of this thesis, we introduce a concept for Process-driven User Assistance. This implies visual and interactive access to a given workflow, representation of the workflow progress, and status-dependent invocation of tools. We present results from a User Study conducted by means of the so-called SemAnnot framework, which we implemented based on the considerations mentioned above. In this study we show that the application of our proposed concept for process-driven user assistance leads to strongly significant improvements in the operability of multimedia annotation systems. These improvements concern the aspects of efficiency, learnability, usability, process overview, and user satisfaction.
  • Item
    Visual Analytics of Large Weighted Directed Graphs and Two-Dimensional Time-Dependent Data
    (Landesberger von Antburg, June 2010) Landesberger von Antburg, Tatiana
    The analysis of large amounts of data is an important task in many application areas, including biology, pharmacy, and traffic planning as well as the social and economic sciences, to name only a few. These fields depend on effective and fast analysis in order to make timely decisions. In this thesis we present new techniques that support such flexibly integrated combinations, with close involvement of the user in the analytical process, for two selected data types. For the visual analysis of directed, weighted graphs, techniques specialized to this data type and to selected usage scenarios were developed. The contribution comprises: 1. Improved analysis of the relationships between graph nodes: the interactive integration of graph-algorithmic analysis and visualization methods enables a simple and effective investigation of these relationships. 2. Extension of the visual, interactive analysis of graph motifs, which examines both predefined structures and structures defined interactively by the user. Based on this motif data, a hierarchical aggregation of the data can be performed to create different levels of abstraction. Furthermore, motif analysis is used to evaluate structural graph changes (e.g., user-defined …
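    As a small illustration of the motif idea, the snippet below counts occurrences of a user-defined directed motif (a 3-cycle) in a graph with networkx; the graph, the motif, and the counting of induced matches are only an example, not the thesis's analysis pipeline.
    ```python
    # Sketch: count induced occurrences of a user-defined directed motif in a graph.
    import networkx as nx
    from networkx.algorithms import isomorphism

    G = nx.gnp_random_graph(60, 0.08, directed=True, seed=1)   # stand-in analysis graph
    motif = nx.DiGraph([(0, 1), (1, 2), (2, 0)])               # user-defined motif: directed 3-cycle

    matcher = isomorphism.DiGraphMatcher(G, motif)
    matches = {frozenset(m) for m in matcher.subgraph_isomorphisms_iter()}
    print(f"{len(matches)} occurrences of the motif")
    ```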
  • Item
    Visibility Computations for Real-Time Rendering in General 3D Environments
    (Mattausch, 2010-04-27) Mattausch, Oliver
    Visibility computations are essential operations in computer graphics, required for rendering acceleration in the form of visibility culling as well as for computing realistic lighting. Visibility culling, which is the main focus of this thesis, aims to provide output sensitivity by sending only visible primitives to the hardware. Regardless of the rapid development of graphics hardware, it is of crucial importance for many applications like game development or architectural design, as the demands on the hardware regarding scene complexity increase accordingly. Solving the visibility problem has been an important research topic for many years, and countless methods have been proposed. Interestingly, there are still open research problems up to this day, and many algorithms are either impractical or only usable for specific scene configurations, preventing their widespread use. Visibility culling algorithms can be separated into algorithms for visibility preprocessing and online occlusion culling. Visibility computations are also required to solve complex lighting interactions in the scene, ranging from soft and hard shadows to ambient occlusion and full-fledged global illumination. It is a big challenge to answer hundreds or thousands of visibility queries within a fraction of a second in order to reach real-time frame rates, which is one goal that we want to achieve in this thesis. The contributions of this thesis are four novel algorithms that provide solutions for efficient visibility interactions in order to achieve high-quality, output-sensitive real-time rendering, and that are general in the sense that they work with any kind of 3D scene configuration. First we present two methods dealing with the issue of automatically partitioning view space and object space into useful entities that are optimal for the subsequent visibility computations. Amazingly, this problem area was mostly ignored despite its importance, and view cells are in practice mostly tweaked by hand in order to reach optimal performance, a very time-consuming task. The first algorithm specifically deals with the creation of an optimal view space partition into view cells using a cost heuristic and sparse visibility sampling. The second algorithm extends this approach to optimize both the view space subdivision and the object space subdivision simultaneously. Next we present a hierarchical online culling algorithm that eliminates most limitations of previous approaches and is rendering-engine friendly in the sense that it allows easy integration and efficient material sorting. It reduces the main problem of previous algorithms, the overhead due to many costly state changes and redundant hardware occlusion queries, to a minimum, obtaining up to a three-times speedup over previous work. Finally we present an ambient occlusion algorithm which works in screen space, and show that high-quality shading with effectively hundreds of samples per pixel is possible in real time for both static and dynamic scenes by utilizing temporal coherence to reuse samples from previous frames.
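    The temporal-coherence idea mentioned for the ambient occlusion contribution can be sketched in a few lines: blend the current frame's noisy per-pixel estimate with the value carried over from the previous frame wherever a depth comparison suggests the surface is still the same. Reprojection is elided here and all names are illustrative; this is not the thesis's algorithm.
    ```python
    # Sketch of temporal reuse for screen-space ambient occlusion: blend with history
    # where the reprojected depth still matches (no disocclusion), restart otherwise.
    # prev_ao / prev_depth are assumed to be already warped into the current frame.
    import numpy as np

    def temporal_ao(curr_ao, curr_depth, prev_ao, prev_depth,
                    depth_tol=0.01, history_weight=0.9):
        valid = np.abs(curr_depth - prev_depth) < depth_tol * curr_depth   # disocclusion test
        return np.where(valid,
                        history_weight * prev_ao + (1 - history_weight) * curr_ao,
                        curr_ao)                                           # restart history
    ```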
  • Item
    Processing of Façade Imagery
    (Musialski, 2010) Musialski, Przemyslaw
    Modeling and reconstruction of urban environments is currently the subject of intensive research. There is a wide range of possible applications, including virtual environments like cyber-tourism, computer games, and the entertainment industries in general, as well as urban planning and architecture, security planning and training, traffic simulation, driving guidance and telecommunications, to name but a few. The research directions are spread across the disciplines of computer vision, computer graphics, image processing, photogrammetry and remote sensing, as well as architecture and the geosciences. Reconstruction is a complex problem and requires an entire pipeline of different tasks. In this thesis we focus on processing of images of façades, which is one specific subarea of urban reconstruction. The goal of our research is to provide novel algorithmic solutions for problems in façade imagery processing. In particular, the contribution of this thesis is the following: First, we introduce a system for generation of approximate orthogonal façade images. The method is a combination of automatic and interactive tools in order to provide a convenient way to generate high-quality results. The second problem addressed in this thesis is façade image segmentation. In particular, by segmentation we usually mean the subdivision of the façade into windows and other architectural elements. We address this topic with two different algorithms for detection of grids over the façade image. Finally, we introduce one more façade processing algorithm, this time with the goal to improve the quality of the façade appearance. The algorithm propagates visual information across the image in order to remove potential obstacles and occluding objects. The output is intended as source for textures in urban reconstruction projects. The construction of large three-dimensional urban environments itself is beyond the scope of this thesis. However, we propose a suite of tools together with mathematical foundations that contribute to the state-of-the-art and provide helpful building blocks important for large scale urban reconstruction projects.
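    To give a concrete flavour of grid detection on a rectified façade image (not the algorithms of the thesis), one simple baseline is to look for peaks in horizontal and vertical gradient projection profiles, which tend to align with window rows and columns.
    ```python
    # Illustrative baseline only: hypothesise grid lines from gradient projection profiles.
    import numpy as np

    def grid_candidates(gray, frac=0.6):
        gy, gx = np.gradient(gray.astype(float))
        row_profile = np.abs(gy).sum(axis=1)           # horizontal structure per image row
        col_profile = np.abs(gx).sum(axis=0)           # vertical structure per image column
        def peaks(p):
            thresh = p.min() + frac * (p.max() - p.min())
            return [i for i in range(1, len(p) - 1)
                    if p[i] > thresh and p[i] >= p[i-1] and p[i] >= p[i+1]]
        return peaks(row_profile), peaks(col_profile)
    ```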
  • Item
    Visual Exploration and Analysis of Perfusion Data
    (Oeltze, 2010-08-25) Oeltze, Steffen
    Perfusion data are dynamic medical image data which characterize the regional blood flow in tissue. These data bear a great potential in medical diagnosis, since diseases can be better distinguished and detected at an earlier stage compared to static image data. The thesis at hand focuses on Magnetic Resonance (MR) perfusion data and their analysis in ischemic stroke diagnosis and in the early detection and diagnosis of Coronary Heart Disease (CHD). Where appropriate, examples from breast tumor diagnosis are consulted to illustrate the flexibility of the developed visual exploration and analysis techniques. The transferability to further application fields of dynamic imaging and to imaging modalities other than MR is outlined at the end of the thesis. For each voxel in a perfusion dataset, a time-intensity curve specifies the accumulation and washout of a contrast agent. Parameters derived from these curves characterize the perfusion and have to be integrated for diagnosis. The diagnostic evaluation of this multiparameter data is challenging and time-consuming due to its complexity. In clinical routine, the evaluation is based on a side-by-side display of single-parameter visualizations whose interpretation demands a considerable cognitive effort to scan back and forth when comparing corresponding regions. Hence, sophisticated visualization techniques are required that generate an integrated display of several parameters, thereby accelerating the evaluation. In this thesis, color-, texture- and glyph-based multiparameter visualizations for the integrated display of several perfusion parameters are presented. MR perfusion data are often acquired in a scanning protocol together with other image data describing different clinical aspects. Together, the data contribute to a global picture of the patient state. CHD diagnosis is a prominent example, including scans that characterize the anatomy of the heart and the great vessels as well as scans depicting the perfusion, viability, and function of the myocardium (heart muscle). The thesis at hand introduces a 3D glyph-based visualization of myocardial perfusion which is embedded in the anatomical context of the myocardium and enhanced by adding viability and functional information. The purely visual exploration of perfusion data and associated perfusion parameters is the prevailing method in the tight schedule of clinical routine. However, it is an observer-dependent and barely reproducible task delivering no quantitative results. An approach is required that merges visual exploration and data analysis techniques into visual analysis for a streamlined investigation of perfusion. The thesis contributes an interactive, feature-based approach for the streamlined visual analysis of perfusion data which comprises components for data preprocessing, statistical analysis, and feature specification. The approach is applied to several datasets from ischemic stroke, CHD, and breast tumor diagnosis as a proof of concept. Furthermore, its benefit in answering crucial investigative questions in perfusion research is demonstrated by comparing data-based and model-based assessment of cerebral perfusion.
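    The parameters derived from a time-intensity curve that the abstract refers to include quantities such as peak enhancement, time to peak, and the integral of the curve. Below is a minimal sketch with a synthetic curve; the parameter names are the commonly used ones, not necessarily the exact set analyzed in the thesis.
    ```python
    # Sketch: derive a few standard perfusion parameters from one voxel's time-intensity curve.
    import numpy as np

    t = np.linspace(0, 60, 61)                                  # seconds
    curve = 100 * (1 - np.exp(-t / 8)) * np.exp(-t / 40)        # synthetic enhancement curve

    baseline      = curve[:5].mean()
    peak_idx      = int(np.argmax(curve))
    peak_enh      = curve[peak_idx] - baseline                  # peak enhancement (PE)
    time_to_peak  = t[peak_idx]                                 # TTP
    area_under    = np.trapz(curve - baseline, t)               # integral / area under the curve
    wash_in_slope = (curve[peak_idx] - baseline) / max(time_to_peak, 1e-6)

    print(peak_enh, time_to_peak, area_under, wash_in_slope)
    ```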
  • Item
    A Robust Approach to Interactive Virtual Cutting: Geometry and Color
    (Pietroni, 2010-05-08) Pietroni, Nico
    Interactive simulation of deformable bodies has attracted growing interest over the course of the last decade and, while for a long time it was limited to application domains such as virtual surgery, it is nowadays a fundamental part of almost every game engine. The reasons for this evolution may be found both in the continuous effort of the scientific community and in the improvement of computer performance, which has made it possible to sustain such a computation-intensive task even on commodity computers. The simulation of a deforming object requires a physical model of the object's behavior and an efficient and stable algorithm to simulate it. Generally speaking, the physical model must consider the phenomenon at the right scale (e.g. a ball will not be modeled as the interaction of its atoms) and capture the aspects of the simulation we are interested in (e.g. not include the temperature when computing the bouncing of the ball). Concerning the algorithm, it must be able to update the state of the system in real time and it must be stable. The latter is particularly critical because the simulation includes the resolution of Partial Differential Equations (PDEs), which could easily diverge if not handled with care. Although many consolidated results exist in this field, there are still problems that need further investigation, for example how to model the cutting (or fracturing) of deformable objects. A cut on a deformable object has two major implications: it changes its boundary by adding a new portion of surface (the part that is revealed by the cut), which means that the geometric description must be updated on the fly and new information (e.g. the color) is needed to render the newly generated surface portion; and it changes the physical behavior of the object, which translates into updating the boundary conditions of the physical model. The contribution of this thesis to the problem stated above is twofold. First, a new algorithm to model interactive cuts or fractures on deforming objects, named Splitting Cubes. The Splitting Cubes algorithm can be considered a tessellation algorithm for deformable surfaces. It is independent of the underlying physical model which defines the deformation functions. For the particular case of mesh-free methods for the physical simulation, we also describe a practical GPU-friendly method to introduce discontinuities of the deformation, the Extended Visibility criterion. Due to its stability and efficiency, the Splitting Cubes algorithm is particularly suitable for interactive simulations, including virtual surgery and games. Second, a new algorithm to derive the color of the interior of an object from a few cross-sections. To address this problem we propose a new appearance-modeling paradigm for synthesizing the internal structure of a 3D model from photographs of a few cross-sections of a real object. In our approach, color attributes (textures) of the surface are synthesized on demand during the simulation. We demonstrate that our modeling paradigm reveals highly realistic internal surfaces in a variety of artistic flavors.
    Due to its efficiency, our approach is suitable for real-time simulations. We finally present two collateral results that emerged during the research carried out in these years: a robust model for the real-time simulation of knot-tying, which is certainly useful in endoscopic surgical simulators, and a technique for building a virtual model of a human head, developed in the framework of the approximation of individual Head-Related Transfer Functions (HRTF) for the realistic binaural rendering of three-dimensional sound.
  • Item
    Digital Processing and Management Tools for 2D and 3D Shape Repositories
    (Saleem, Waqar, 2010-06-18) Saleem, Waqar
    This thesis presents work on several aspects of 3D shape processing. We develop a learning-based surface reconstruction algorithm that is robust to typical input artifacts and alleviates the restrictions imposed by previous such methods. Using the human shape perception motivated paradigm of representing a 3D shape by its 2D views obtained from its view sphere, we compute the shape's …
  • Item
    A Stochastic Parallel Method for Real Time Monocular SLAM Applied to Augmented Reality
    (Sanchez Tapia, 2010-12-10) Sanchez Tapia, Jairo Roberto
    In augmented reality applications, the position and orientation of the observer must be estimated in order to create a virtual camera that renders virtual objects aligned with the real scene. There is a wide variety of motion sensors available on the market; however, these sensors are usually expensive and impractical. In contrast, computer vision techniques can be used to estimate the camera pose using only the images provided by a single camera, if the 3D structure of the captured scene is known beforehand. When it is unknown, some solutions use external markers; however, these require modifying the scene, which is not always possible.
    Simultaneous Localization and Mapping (SLAM) techniques can deal with completely unknown scenes, simultaneously estimating the camera pose and the 3D structure. Traditionally, this problem is solved using nonlinear minimization techniques that are very accurate but hardly usable in real time. This thesis therefore presents a highly parallelizable random sampling approach based on Monte Carlo simulations that fits very well on graphics hardware. As demonstrated in the text, the proposed algorithm achieves the same precision as nonlinear optimization while reaching real-time performance on commodity graphics hardware.
    Throughout this document, the details of the proposed SLAM algorithm are analyzed, as well as its implementation on a GPU. Moreover, an overview of the existing techniques is given, comparing the proposed method with the traditional approach.
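    The contrast between nonlinear minimization and random sampling can be illustrated with a toy pose scorer: draw many perturbed camera-pose hypotheses, score each by reprojection error, and keep the best. The pinhole model, the small-angle rotation, and the Gaussian sampling below are generic illustrations rather than the thesis's formulation; since every hypothesis is evaluated independently, this style of search maps naturally onto a GPU.
    ```python
    # Toy sketch of pose estimation by random sampling instead of nonlinear minimisation.
    import numpy as np

    def project(points, rvec, tvec, f=500.0):
        # minimal pinhole projection with a small-angle rotation approximation
        wx, wy, wz = rvec
        R = np.array([[1, -wz, wy], [wz, 1, -wx], [-wy, wx, 1]])
        cam = points @ R.T + tvec
        return f * cam[:, :2] / cam[:, 2:3]

    def best_pose(points3d, obs2d, pose0, n_samples=2000, sigma=0.02, rng=np.random):
        best, best_err = pose0, np.inf
        for _ in range(n_samples):                       # independent hypotheses
            cand = pose0 + sigma * rng.randn(6)          # [rotation | translation]
            proj = project(points3d, cand[:3], cand[3:])
            err = np.mean(np.linalg.norm(proj - obs2d, axis=1))   # reprojection error
            if err < best_err:
                best, best_err = cand, err
        return best, best_err
    ```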
  • Item
    Hybrid Methods for Interactive Shape Manipulation
    (Weber, 2010-09-12) Weber
    Manipulating 2D and 3D shapes interactively is an important task in computer graphics. The challenge is to be able to induce a global change to the shape while preserving its local structure. A deformation tool accepts as input a source shape as well as some user-specified constraints that can be manipulated interactively. The amount of constraints provided by the user should be kept to a minimum in order to make the tool intuitive to control. The required output is a shape that satisfies the imposed constraints, yet strives to preserve the character and the fine geometric details of the original shape.
    In this work, we explore several techniques to achieve detail-preserving shape deformation. We first provide an algorithm that combines an intrinsic representation of a surface (using differential coordinates) with a data-driven approach. The realism of the deformation is increased by incorporating example shapes that put the deformation into context, demonstrating characteristic deformations of the shape, such as the bulging of muscles and the appearance of folds for human or animal shapes.
    We then turn to a different approach which is fundamentally a space deformation technique. Space deformation deforms the ambient space rather than the object directly, and any object embedded in that space deforms accordingly as a byproduct. The main advantage of space deformation is that it is not limited to a particular geometric representation such as triangle meshes. A popular way to perform space deformation is to use barycentric coordinates; however, deformation with barycentric coordinates essentially destroys the fine details of the shape. We extend the notion of barycentric coordinates in two dimensions to complex numbers. This generalization results in a hybrid approach that provides the ability to obtain shape preservation within a space deformation framework.
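    The complex-number extension mentioned above can be stated compactly. Identifying the planar domain with the complex plane, a deformation built from complex-valued coordinate functions takes the form below; when the coordinate functions are holomorphic in z, the resulting map is holomorphic and therefore locally angle-preserving, which is what helps it retain fine details. The notation is a generic formulation rather than the thesis's exact definitions.
    ```latex
    f(z) \;=\; \sum_{j=1}^{n} C_j(z)\, w_j,
    \qquad C_j(z) \in \mathbb{C},\; w_j \in \mathbb{C},
    ```
    where the w_j are the displaced cage vertices, in analogy to real barycentric interpolation of displaced cage positions.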
  • Item
    Filament-Based Smoke
    (Weißmann, 2010-09-15) Weißmann, Steffen
    This cumulative dissertation presents a complete model for simulating smoke using polygonal vortex filaments. Based on a Hamiltonian system for the dynamics of smooth vortex filaments, we develop an efficient and robust algorithm that allows simulations in real time. The discrete smoke ring flow allows the use of coarse polygonal vortex filaments while preserving the qualitative behavior of the smooth system. The method handles rigidly moving obstacles as boundary conditions and simulates vortex shedding. Obstacles as well as shed vorticity are also represented as polygonal filaments. Variational vortex reconnection prevents the exponential increase of filament length over time, without significant modification of the fluid velocity field. This allows for simulations over extended periods of time. The algorithm reproduces various real experiments (colliding vortex rings, wakes) that are challenging for classical methods.
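    For context, the velocity field induced by a collection of vortex filaments is given by the regularized Biot–Savart law; evaluating it along the polygonal filaments is what drives a filament-based smoke simulation. The smoothing parameter a below stands for a finite filament thickness; this is the textbook form, not a reproduction of the thesis's discretization.
    ```latex
    \mathbf{u}(\mathbf{x}) \;=\; \sum_{k} \frac{\Gamma_k}{4\pi}
      \oint_{\gamma_k} \frac{\mathrm{d}\mathbf{y} \times (\mathbf{x}-\mathbf{y})}
                            {\left(\lvert \mathbf{x}-\mathbf{y} \rvert^{2} + a^{2}\right)^{3/2}}
    ```
    Here Γ_k is the circulation carried by filament γ_k, and the smoke particles are advected in the resulting velocity field.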