2009

Real-time Rendering and Animation of Vegetation

Habel, Ralf

Inverse Tone Mapping

Banterle, Francesco

GPU-based Multi-Volume Rendering of Complex Data in Neuroscience and Neurosurgery

Beyer, Johanna

Audio and Visual Rendering with Perceptual Foundations

Bonneel, Nicolas

Expressive Image Manipulations for a Variety of Visual Representations

Bousseau, Adrien

Capturing and Reconstructing the Appearance of Complex 3D Scenes

Fuchs, Christian

Uses of uncalibrated images to enrich 3D models information

Dellepiane, Matteo

Self-Delaunay Meshes for Surfaces

Dyer, Ramsay

Filtering and Optimization Strategies for Markerless Human Motion Capture with Skeleton-Based Shape Models

Gall, Juergen

Techniques for Stochastic Implicit Surface Modelling and Rendering

Gamito, Manuel Noronha

Analysis and Visualization of Industrial CT Data

Heinzl, Christoph

Reconstruction and Analysis of Shapes from 3D Scans

Haar, Frank B. ter

LiveSync: Smart Linking of 2D and 3D Views in Medical Applications

Kohlmann, Peter

Feature Centric Volume Visualization

Malik, Muhammad Muddassir

Comprehensive Visualization of Cardiac MRI Data

Termeer, Maurice Alain

High Quality Dynamic Reflectance and Surface Reconstruction from Video

Ahmed, Naveed

Expressive Visualization and Rapid Interpretation of Seismic Volumes

Patel, Daniel

Semantic Visualization Mapping for Volume Illustration

Rautek, Peter

Perceptually-motivated, Interactive Rendering and Editing of Global Illumination

Ritschel, Tobias

Robust and Efficient Processing Techniques for Static and Dynamic Geometric Data

Schall, Oliver

Feature Extraction for Visual Analysis of DW-MRI Data

Schultz, Thomas

Applications of temporal coherence in real-time rendering

Scherzer, Daniel

Template based shape processing

Stoll, Carsten

On Visualization and Reconstruction from Non-uniform Point Sets

Vuçini, Erald

Anatomical Modeling for Image Analysis in Cardiology

Zambal, Sebastian



Recent Submissions

  • Item
    Real-time Rendering and Animation of Vegetation
    (Habel, Jan 2009) Habel, Ralf
    Vegetation rendering and animation in real-time applications still pose a significant problem due to the inherent complexity of plants. Both the high geometric complexity and intricate light transport require specialized techniques to achieve high-quality rendering of vegetation in real time. This thesis presents new algorithms that address various areas of both vegetation rendering and animation. For grass rendering, an efficient algorithm to display dense and short grass is introduced. In contrast to previous methods, the new approach is based on ray tracing to avoid the massive overdraw of billboard or explicit geometry representation techniques, achieving independence of the complexity of the grass without losing the visual characteristics of grass such as parallax and occlusion effects as the viewpoint moves. Also, a method to efficiently render leaves is introduced. Leaves exhibit a complex light transport behavior due to subsurface scattering, and special attention is given to the translucency of leaves, an integral part of leaf shading. The light transport through a leaf is precomputed and can be easily evaluated at runtime, making it possible to shade a massive amount of leaves while including the effects that occur due to the leaf structure, such as varying albedo, thickness variations, or self-shadowing. To animate a tree, a novel deformation method is introduced, based on a structural mechanics model that incorporates the important physical properties of branches. This model does not require the branches to be segmented by joints as other methods do, achieving smooth and accurate bending, and can be executed fully on a GPU. To drive this deformation, an optimized spectral approach that also incorporates the physical properties of branches is used. This allows animating a highly detailed tree with thousands of branches and tens of thousands of leaves efficiently. Additionally, a method to use dynamic skylight models in spherical harmonics precomputed radiance transfer techniques is introduced, allowing the skylight parameters to be changed in real time at no considerable cost and memory footprint.
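    In standard diffuse spherical harmonics precomputed radiance transfer, which the skylight method above plugs into, per-vertex outgoing radiance reduces to a dot product between the SH projection of the (dynamic) sky and a precomputed transfer vector. A minimal sketch of that evaluation step, assuming 3 SH bands and random placeholder coefficients (not data from the thesis):

```python
import numpy as np

def shade_vertices(light_coeffs, transfer_coeffs):
    """Evaluate diffuse SH precomputed radiance transfer.

    light_coeffs:    (n,) SH projection of the dynamic skylight.
    transfer_coeffs: (v, n) precomputed per-vertex transfer vectors,
                     baking in visibility and the cosine term.
    Returns per-vertex outgoing radiance of shape (v,).
    """
    return transfer_coeffs @ light_coeffs

# Illustrative usage: 3 SH bands (9 coefficients), 4 vertices.
rng = np.random.default_rng(0)
light = rng.standard_normal(9)          # re-projected whenever the sky changes
transfer = rng.standard_normal((4, 9))  # precomputed once per vertex
print(shade_vertices(light, transfer))
```

    Because the per-frame cost is one small dot product per vertex, changing skylight parameters stays cheap, which is the property the abstract emphasizes.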
  • Item
    Inverse Tone Mapping
    (Banterle, 2009-06-04) Banterle, Francesco
    The introduction of High Dynamic Range Imaging in computer graphics has produced a change in imaging that can be compared to the introduction of colour photography, or is perhaps even more significant. Light can now be captured, stored, processed, and finally visualised without losing information. Moreover, new applications that can exploit physical values of the light have been introduced, such as re-lighting of synthetic/real objects, or enhanced visualisation of scenes. However, these new processing and visualisation techniques cannot be applied to the movies and pictures that photography and cinematography have produced over more than one hundred years. This thesis introduces a general framework for expanding legacy content into High Dynamic Range content. The expansion is achieved while avoiding artefacts, producing images suitable for visualisation and for re-lighting of synthetic/real objects. Moreover, a methodology based on psychophysical experiments and computational metrics is presented to measure the performance of expansion algorithms. Finally, a compression scheme for High Dynamic Range textures, inspired by the framework, is proposed and evaluated.
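    For intuition about the expansion step, one common family of inverse tone mapping operators inverts a global tone curve to push display luminance back into a high dynamic range. The sketch below inverts Reinhard's global operator as a generic illustration; the peak-luminance parameter and the choice of operator are assumptions, not the specific method proposed in the thesis:

```python
import numpy as np

def expand_reinhard(ldr_luminance, l_max=4000.0, eps=1e-4):
    """Invert Reinhard's global operator Ld = L / (1 + L).

    ldr_luminance: normalized display luminance in [0, 1].
    l_max:         assumed peak luminance the expanded white maps to.
    The inverse L = Ld / (1 - Ld) diverges at Ld = 1, so white is clipped.
    """
    ld = np.clip(ldr_luminance, 0.0, 1.0 - eps)
    return l_max * ld / (1.0 - ld)

# Illustrative usage: mid grey expands moderately, highlights expand strongly.
print(expand_reinhard(np.array([0.18, 0.5, 0.95, 1.0])))
```

    Artefact-free expansion in practice needs more care than this pointwise curve, e.g. around clipped highlights, which is precisely what the framework above addresses.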
  • Item
    GPU-based Multi-Volume Rendering of Complex Data in Neuroscience and Neurosurgery
    (Beyer, 27.11.2009) Beyer, Johanna
    Recent advances in image acquisition technology and its availability in the medical and bio-medical fields have led to an unprecedented amount of high-resolution imaging data. However, the inherent complexity of this data, caused by its tremendous size, complex structure or multi-modality, poses several challenges for current visualization tools. Recent developments in graphics hardware architecture have increased the versatility and processing power of today's GPUs to the point where GPUs can be considered parallel scientific computing devices. The work in this thesis builds on the current progress in image acquisition techniques and graphics hardware architecture to develop novel 3D visualization methods for the fields of neurosurgery and neuroscience. The first part of this thesis presents an application and framework for planning of neurosurgical interventions. Concurrent GPU-based multi-volume rendering is used to visualize multiple radiological imaging modalities, delineating the patient's anatomy, neurological function, and metabolic processes. Additionally, novel interaction metaphors are introduced, allowing the surgeon to plan and simulate the surgical approach to the brain based on the individual patient anatomy. The second part of this thesis focuses on GPU-based volume rendering techniques for large and complex EM data, as required in the field of neuroscience. A new mixed-resolution volume ray-casting approach is presented, which circumvents artifacts at block boundaries of different resolutions. NeuroTrace is introduced, an application for interactive segmentation and visualization of neural processes in EM data. EM data is extremely dense, heavily textured, and exhibits a complex structure of interconnected nerve cells, making it difficult to achieve high-quality volume renderings. Therefore, this thesis presents a novel on-demand nonlinear noise removal and edge detection method which enhances important structures (e.g., myelinated axons) while de-emphasizing less important regions of the data. In addition to the methods and concepts described above, this thesis tries to bridge the gap between state-of-the-art visualization research and the use of those visualization methods in actual medical and bio-medical applications.
  • Item
    Audio and Visual Rendering with Perceptual Foundations
    (Bonneel, 2009-09-15) Bonneel, Nicolas
    Realistic visual and audio rendering still remains a technical challenge. Indeed, typical computers do not cope with the increasing complexity of today's virtual environments, both for audio and visuals, and the graphic design of such scenes requires talented artists. In the first part of this thesis, we focus on audiovisual rendering algorithms for complex virtual environments, which we improve using human perception of combined audio and visual cues. In particular, we developed a full perceptual audiovisual rendering engine integrating an efficient impact-sound rendering improved by using our perception of audiovisual simultaneity, a way to cluster sound sources using the human spatial tolerance between a sound and its visual representation, and a combined level-of-detail mechanism for both audio and visuals, varying the impact-sound quality and the visually rendered material quality of the objects. All our crossmodal effects are supported by prior work in neuroscience and were demonstrated using our own experiments in virtual environments. In the second part, we use information present in photographs in order to guide a visual rendering. We thus provide two different tools to assist casual artists such as gamers or engineers. The first extracts the visual hair appearance from a photograph, thus allowing the rapid customization of avatars in virtual environments. The second allows for a fast previewing of 3D scenes reproducing the appearance of an input photograph following a user's 3D sketch. We thus propose a first step toward crossmodal audiovisual rendering algorithms and develop practical tools for non-expert users to create virtual worlds using a photograph's appearance.
  • Item
    Expressive Image Manipulations for a Variety of Visual Representations
    (Bousseau, 2009-10-15) Bousseau, Adrien
    Visual communication greatly benefits from the large variety of appearances that an image can take. By neglecting spurious details, simplified images focus the attention of an observer on the essential message to transmit. Stylized images, which depart from reality, can suggest subjective or imaginary information. More subtle variations, such as a change of lighting in a photograph, can also have a dramatic effect on the interpretation of the transmitted message. The goal of this thesis is to allow users to manipulate visual content and create images that correspond to their communication intent. We propose a number of manipulations that modify, simplify or stylize images in order to improve their expressive power. We first present two methods to remove details in photographs and videos. The resulting simplification enhances the relevant structures of an image. We then introduce a novel vector primitive, called Diffusion Curves, that facilitates the creation of smooth color gradients and blur in vector graphics. The images created with diffusion curves contain complex image features that are hard to obtain with existing vector primitives. In the second part of this manuscript we propose two algorithms for the creation of stylized animations from 3D scenes and videos. The two methods produce animations with the 2D appearance of traditional media such as watercolor. Finally, we describe an approach to decompose the illumination and reflectance components of a photograph. We make this ill-posed problem tractable by propagating sparse user indications. This decomposition allows users to modify lighting or material in the depicted scene. The various image manipulations proposed in this dissertation facilitate the creation of a variety of visual representations, as illustrated by our results.
  • Item
    Capturing and Reconstructing the Appearance of Complex 3D Scenes
    (Fuchs, Christian, 2009-05-29) Fuchs, Christian
    In this thesis, we present our research on new acquisition methods for reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well-known techniques, synthetic aperture confocal imaging and algorithmic descattering, can be combined to help look through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders the range estimation by offsetting the peak intensity beneath the surface away from the point of incidence. With our method, the subsurface scattering is reduced to a minimum and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers surface geometry, reflectance properties of opaque objects, and prevailing lighting conditions at the time of image capture from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images we took from a community photo collection website.
  • Item
    Uses of uncalibrated images to enrich 3D models information
    (Dellepiane, 2009) Dellepiane, Matteo
    The decrease in costs of semi-professional digital cameras has led to the possibility for everyone to acquire a very detailed description of a scene in a very short time. Unfortunately, the interpretation of the images is usually quite hard, due to the amount of data and the lack of robust and generic image analysis methods. Nevertheless, if a geometric description of the depicted scene is available, it gets much easier to extract information from 2D data. This information can be used to enrich the quality of the 3D data in several ways. In this thesis, several uses of sets of unregistered images for the enrichment of 3D models are shown. In particular, two possible fields of application are presented: color acquisition, projection and visualization, and geometry modification. Regarding color management, several practical and cheap solutions to overcome the main issues in this field are presented. Moreover, some real applications, mainly related to Cultural Heritage, show that the provided methods are robust and effective. In the context of geometry modification, two approaches are presented to modify already existing 3D models. In the first one, information extracted from images is used to deform a dummy model to obtain accurate 3D head models, used for simulation in the context of three-dimensional audio rendering. The second approach presents a method to fill holes in 3D models, with the use of registered images depicting a pattern projected on the real object. Finally, some useful indications about possible future work in all the presented fields are given, in order to delineate the developments of this promising direction of research.
  • Item
    Self-Delaunay Meshes for Surfaces
    (Dyer, 2010) Dyer, Ramsay
    In the Euclidean plane, a Delaunay triangulation can be characterized by the requirement that the circumcircle of each triangle be empty of vertices of all other triangles. For triangulating a surface S in R^3, the Delaunay paradigm has typically been employed in the form of the restricted Delaunay triangulation, where the empty circumcircle property is defined by using the Euclidean metric in R^3 to measure distances on the surface. More recently, the intrinsic (geodesic) metric of S has also been employed to define the Delaunay condition. In either case the resulting mesh M is known to approximate S with increasing accuracy as the density of the sample points increases. However, the use of the reference surface S to define the Delaunay criterion is a serious limitation. In particular, in the absence of the original reference surface, there is no way of verifying whether a given mesh meets the criterion. We define a self-Delaunay mesh as a triangle mesh that is a Delaunay triangulation of its vertex set with respect to the intrinsic metric of the mesh itself. This yields a discrete surface representation criterion that can be validated by the properties of the mesh alone, independent of any reference surface the mesh is supposed to represent. The intrinsic Delaunay triangulation that characterizes self-Delaunay meshes makes them a natural domain for discrete differential geometry, and the discrete exterior calculus in particular. We examine self-Delaunay meshes and their relationship with other Delaunay structures for surface representation. We study sampling conditions relevant to the intrinsic approach, and compare these with traditional sampling conditions, which are based on extrinsic quantities and distances in the ambient Euclidean space. We also provide practical and provably correct algorithms for constructing self-Delaunay meshes. Of particular interest in this context is the extrinsic edge-flipping algorithm, which extends the familiar algorithm for producing planar Delaunay triangulations.
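    The edge-flipping algorithms mentioned above rest on the local Delaunay criterion: an interior edge is Delaunay when the two angles opposite it in its adjacent triangles sum to at most pi, a condition computable from edge lengths alone and hence usable with the mesh's intrinsic metric. A minimal sketch of that test (function names are illustrative, not from the thesis):

```python
import math

def opposite_angle(a, b, c):
    """Angle opposite side c in a triangle with side lengths a, b, c (law of cosines)."""
    return math.acos((a * a + b * b - c * c) / (2.0 * a * b))

def is_locally_delaunay(e, a1, b1, a2, b2):
    """Local Delaunay test for an interior edge of length e.

    (a1, b1) and (a2, b2) are the remaining side lengths of the two
    triangles sharing the edge. Flipping every edge that fails this test,
    until none remain, yields a Delaunay triangulation of the vertex set.
    """
    return opposite_angle(a1, b1, e) + opposite_angle(a2, b2, e) <= math.pi

# Illustrative usage: two equilateral triangles sharing an edge pass the test.
print(is_locally_delaunay(1.0, 1.0, 1.0, 1.0, 1.0))  # True (60 + 60 <= 180 degrees)
```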
  • Item
    Filtering and Optimization Strategies for Markerless Human Motion Capture with Skeleton-Based Shape Models
    (Gall, Juergen, 2009-07-07) Gall, Juergen
    For more than 2000 years, people have been interested in understanding and analyzing the movements of animals and humans, an interest that has led to the development of advanced computer systems for motion capture. Although marker-based systems for motion analysis are commercially successful, capturing the performance of a human or an animal from a multi-view video sequence without the need for markers is still a challenging task. The most popular methods for markerless human motion capture are model-based approaches that rely on a surface model of the human with an underlying skeleton. In this context, markerless motion capture seeks the pose, i.e., the position, orientation, and configuration of the human skeleton, that is best explained by the image data. In order to address this problem, we discuss two questions:

    1. What are good cues for human motion capture? Typical cues for motion capture are silhouettes, edges, color, motion, and texture. In general, a multi-cue integration is necessary for tracking complex objects like humans since all these cues come along with inherent drawbacks. Besides the selection of the cues to be combined, reasonable information fusion is a common challenge in many computer vision tasks. Ideally, the impact of a cue should be large in situations where its extraction is reliable, and small if the information is likely to be erroneous. To this end, we propose an adaptive weighting scheme that combines complementary cues, namely silhouettes on one side and optical flow as well as local descriptors on the other side. Whereas silhouette extraction works best in the case of homogeneous objects, optical flow computation and local descriptors perform better on sufficiently structured objects. Besides image-based cues, we also propose a statistical prior on anatomical constraints that is independent of motion patterns. Relying only on image features that are tracked over time does not prevent the accumulation of small errors, which results in a drift away from the target object. The error accumulation becomes even more problematic in the case of multiple moving objects due to occlusions. To solve the drift problem for tracking, we propose an analysis-by-synthesis framework that uses reference images to correct the pose. It comprises an occlusion handling and is successfully applied to crash test video analysis.

    2. Is human motion capture a filtering or an optimization problem? Model-based human motion capture can be regarded as a filtering or an optimization problem. While local optimization offers accurate estimates but often loses track due to local optima, particle filtering can recover from errors at the expense of a poor accuracy due to overestimation of noise. In order to overcome the drawbacks of local optimization, we introduce a novel global stochastic optimization approach for markerless human motion capture that is derived from the mathematical theory on interacting particle systems. We call the method interacting simulated annealing (ISA) since it is based on an interacting particle system that converges to the global optimum similar to simulated annealing. It estimates the human pose without initial information, which is a challenging optimization problem in a high-dimensional space. Furthermore, we propose a tracking framework that is based on this optimization technique to achieve both the robustness of filtering strategies and a remarkable accuracy. In order to benefit from optimization and filtering, we introduce a multi-layer framework that combines stochastic optimization, filtering, and local optimization. While the first layer relies on interacting simulated annealing, the second layer refines the estimates by filtering and local optimization such that the accuracy is increased and ambiguities are resolved over time without imposing restrictions on the dynamics. In addition, we propose a system that recovers not only the movement of the skeleton, but also the possibly non-rigid temporal deformation of the 3D surface. While large-scale deformations or fast movements are captured by the skeleton pose and approximate surface skinning, true small-scale deformations or non-rigid garment motion are captured by fitting the surface to the silhouette. In order to make automatic processing of large data sets feasible, the skeleton-based pose estimation is split into a local one and a lower-dimensional global one by exploiting the tree structure of the skeleton. Our experiments comprise a large variety of sequences for qualitative and quantitative evaluation of the proposed methods, including a comparison of global stochastic optimization with several other optimization and particle filtering approaches.
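    The ISA scheme described above combines a particle population with an annealing schedule: particles are weighted by an increasingly peaked function of the pose energy, resampled so they interact, then diffused. The sketch below is a generic illustration of that pattern under assumed Gaussian diffusion and geometric annealing, not the exact algorithm or parameters from the thesis:

```python
import numpy as np

def interacting_annealing(energy, dim, n_particles=200, n_iters=50,
                          beta0=1.0, beta_growth=1.1, sigma0=0.5, rng=None):
    """Generic interacting-particle annealing for minimizing `energy`.

    Each iteration: weight particles by exp(-beta * E), resample according
    to the weights (the interaction step), then mutate with shrinking
    Gaussian noise while the inverse temperature beta grows.
    """
    rng = rng or np.random.default_rng(0)
    particles = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    beta, sigma = beta0, sigma0
    for _ in range(n_iters):
        e = np.array([energy(p) for p in particles])
        w = np.exp(-beta * (e - e.min()))                     # selection weights
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        particles += rng.normal(0.0, sigma, particles.shape)  # mutation/diffusion
        beta *= beta_growth                                    # annealing schedule
        sigma *= 0.95
    return particles[np.argmin([energy(p) for p in particles])]

# Illustrative usage: a 30-dimensional quadratic stands in for a pose energy.
print(interacting_annealing(lambda p: float(p @ p), dim=30))
```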
  • Item
    Techniques for Stochastic Implicit Surface Modelling and Rendering
    (Gamito, Sept 2009) Gamito, Manuel Noronha
    Implicit surfaces are a powerful shape primitive for computer graphics. This thesis focuses on a shape modelling approach which generates synthetic shapes by the specification of an implicit surface generating function that has random properties. This type of graphic object can be called a stochastic implicit surface because the surface is perceived as the realisation of a stochastic process. The main contributions of this thesis are in the form of new and improved modelling and rendering algorithms to deal with stochastic implicit surfaces that can be complex and feature fractal scaling properties. On the modelling side, a new topological correction algorithm is proposed to detect disconnected surface parts that arise as a consequence of the implicit surface representation. A surface deformation algorithm, based on advection through a vector field, is also presented. On the rendering side, several algorithms are proposed. First, an improved ray casting method is presented that guarantees correct intersections between view rays and the surface. Second, a new progressive refinement rendering algorithm is proposed that provides a dynamic rendering environment where the image quality steadily increases with time. Third, a distributed rendering mechanism is presented to deal with the long computation times involved in the image synthesis of stochastic implicit surfaces. An application of the proposed techniques is given in the context of the procedural modelling of a planet. A procedural planet model must be able to generate synthetic planets showing the correct range of geological scales. The planet is generated as a stochastic implicit surface. This represents an improvement over previous models that generated planets as displacement maps over a sphere. Terrain features that were previously difficult to model can be achieved through the implicit surface approach. This approach is a generalisation of those previous models, since displacement maps over the sphere can be recast as implicit surfaces.
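    For intuition about ray casting an implicit surface f(x) = 0, the marcher must take steps guaranteed not to jump over a root. The sketch below uses sphere tracing under an assumed Lipschitz bound on f, a simpler guarantee than the range-bounding tests a robust renderer for stochastic surfaces would need; it is an illustration of the problem, not the thesis's algorithm:

```python
import math

def sphere_trace(f, lipschitz, origin, direction, t_max=100.0, eps=1e-5):
    """Ray-cast an implicit surface f(x) = 0 by sphere tracing.

    Assumes |f(p)| / lipschitz lower-bounds the distance from p to the
    surface, so no step can skip an intersection. Robust renderers replace
    this with interval or affine-arithmetic bounds on f along the ray.
    """
    t = 0.0
    while t < t_max:
        p = [o + t * d for o, d in zip(origin, direction)]
        step = abs(f(p)) / lipschitz   # provably safe step size
        if step < eps:
            return t                    # hit: surface within tolerance
        t += step
    return None                         # ray missed within t_max

# Illustrative usage: unit sphere f(x) = |x| - 1, Lipschitz constant 1.
f = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
print(sphere_trace(f, 1.0, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # ~2.0
```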
  • Item
    Analysis and Visualization of Industrial CT Data
    (Heinzl, Dec 2008) Heinzl, Christoph
    Industrial X-ray 3D computed tomography (3DCT) is on the edge of advancing from a non-destructive testing method to a fully standardized means of dimensional measurement for everyday industrial use. Currently 3DCT has drawn attention especially in the area of first part inspections of new components, mainly in order to overcome limitations and drawbacks of common methods. An increasing number of companies is benefitting from industrial 3DCT, and sporadically the first pioneers have started using industrial 3DCT for quality control in the production phase of a component. As 3DCT is still a very young technology of industrial quality control, the method also faces severe problems which seriously affect measurement results. Some of the major drawbacks for quality control are the following: artefacts modify the spatial grey values, generating artificial structures in the datasets which do not correspond to reality; discrete sampling introduces further irregularities due to the Nyquist-Shannon sampling theorem; uncertainty information is missing when extracting dimensional measurement features; and the specifications and limitations of the components and the special setup of a 3DCT constrain the best achievable measurement precision. This thesis contributes to the state of the art by algorithmic evaluation of typical industrial tasks in the area of dimensional measurement using 3DCT. The main focus lies in the development and implementation of novel pipelines for everyday industrial use, including comparisons to common methods. Convenient and easy-to-understand means of visualization are evaluated and used to provide insight into the generated results. In particular, three pipelines are introduced which cover some of the major aspects concerning metrology using industrial 3DCT. The considered aspects are robust surface extraction, artefact reduction via dual energy CT, local surface extraction of multi-material components, and statistical analysis of multi-material components. The generated results of each pipeline are demonstrated and verified using test specimens as well as real-world components.
  • Item
    Reconstruction and Analysis of Shapes from 3D Scans
    (Haar, 2009) Haar, Frank B. ter
    In this thesis we use 3D laser range scans for the acquisition, reconstruction, and analysis of 3D shapes. 3D laser range scanning has proven to be a fast and effective way to capture the surface of an object in a computer. Thousands of depth measurements represent a part of the surface geometry as a cloud of 3D points, and geometric algorithms have been developed to turn such 3D point sets into manageable shapes and shape representations for end users or other algorithms. We use 3D laser range scans to evaluate acquisition and reconstruction systems and algorithms, to fully automate the object reconstruction, to find discriminative face features, for automatic landmark extraction, for face identification with and without expressions, and for the statistical modeling of faces.
  • Item
    LiveSync: Smart Linking of 2D and 3D Views in Medical Applications
    (Kohlmann, Dec 2008) Kohlmann, Peter
    In this thesis two techniques for the smart linking of 2D and 3D views in medical applications are presented. Although real-time interactive 3D volume visualization is available even for very large data sets, it is used quite rarely in clinical practice. A major obstacle for a better integration in the clinical workflow is the time-consuming process of adjusting the parameters to generate diagnostically relevant images. The clinician has to take care of the appropriate viewpoint, zooming, transfer function setup, clipping planes, and other parameters. Because of this, current applications primarily employ 2D views generated through standard techniques such as multi-planar reformatting (MPR). The LiveSync interaction metaphor is a new concept to synchronize 2D slice views and 3D volumetric views of medical data sets. Through intuitive picking actions on the slice, the users define the anatomical structures they are interested in. The 3D volumetric view is updated automatically with the goal that the users are provided with diagnostically relevant images. To achieve this live synchronization, a minimal set of derived information is used, without the need for segmented data sets or data-specific precomputations. The presented system provides the physician with synchronized views which help to gain deeper insight into the medical data with minimal user interaction. Contextual picking is a novel method for the interactive identification of contextual interest points within volumetric data by picking on a direct volume rendered image. In clinical diagnostics the points of interest are often located in the center of anatomical structures. In order to derive the volumetric position which allows a convenient examination of the intended structure, the system automatically extracts contextual meta information from the DICOM (Digital Imaging and Communications in Medicine) images and the setup of the medical workstation. Along a viewing ray for a volumetric picking, the ray profile is analyzed to detect structures which are similar to predefined templates from a knowledge base. It is demonstrated that the obtained position in 3D can be utilized to highlight a structure in 2D slice views, to interactively calculate approximate centerlines of tubular objects, or to place labels at contextually-defined 3D positions.
  • Item
    Feature Centric Volume Visualization
    (Malik, 11.12.2009) Malik, Muhammad Muddassir
    This thesis presents techniques and algorithms for the effective exploration of volumetric datasets. The visualization techniques are designed to focus on user-specified features of interest. The proposed techniques are grouped into four chapters, namely feature peeling, computation and visualization of fabrication artifacts, locally adaptive marching cubes, and comparative visualization for parameter studies of dataset series. The presented methods enable the user to efficiently explore the volumetric dataset for features of interest. Feature peeling is a novel rendering algorithm that analyzes ray profiles along lines of sight. The profiles are subdivided according to encountered peaks and valleys at so-called transition points. The sensitivity of these transition points is calibrated via two thresholds. The slope threshold is based on the magnitude of a peak following a valley, while the peeling threshold measures the depth of the transition point relative to the neighboring rays. This technique separates the dataset into a number of feature layers. Fabrication artifacts are of prime importance for quality control engineers for first part inspection of industrial components. Techniques are presented in this thesis to measure fabrication artifacts through direct comparison of a reference CAD model with the corresponding industrial 3D X-ray computed tomography volume. Information from the CAD model is used to locate corresponding points in the volume data. Then various comparison metrics are computed to measure differences (fabrication artifacts) between the CAD model and the volumetric dataset. The comparison metrics are classified as either geometry-driven or visual-driven comparison techniques. The locally adaptive marching cubes algorithm is a modification of the marching cubes algorithm where, instead of a global iso-value, each grid point has its own iso-value. This defines an iso-value field, which modifies the case identification process in the algorithm. An iso-value field enables the algorithm to correct biases within the dataset like low-frequency noise, contrast drifts, local density variations, and other artifacts introduced by the measurement process. It can also be used for blending between different iso-surfaces (e.g., skin and bone in a medical dataset). Comparative visualization techniques are proposed to carry out parameter studies for the special application area of dimensional measurement using industrial 3D X-ray computed tomography. A dataset series is generated by scanning a specimen multiple times while varying parameters of the scanning device. A high-resolution series is explored using a planar-reformatting-based visualization system. A multi-image view and an edge explorer are proposed for comparing and visualizing gray values and edges of several datasets simultaneously. For fast data retrieval and convenient usability the datasets are bricked and efficient data structures are used.
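    The locally adaptive variant changes only how the marching-cubes case index is formed: each cube corner is compared against its own value from the iso-value field rather than a single global threshold. A minimal sketch of that case-identification step, with illustrative corner values and an assumed array layout:

```python
def cube_case_index(corner_values, corner_iso_values):
    """Marching-cubes case index with a per-corner iso-value field.

    Standard marching cubes compares all eight corners against one global
    iso-value; here corner i uses its own corner_iso_values[i], which lets
    the extracted surface compensate for local bias such as contrast drift.
    Returns an index in [0, 255] selecting the triangulation case.
    """
    index = 0
    for i, (value, iso) in enumerate(zip(corner_values, corner_iso_values)):
        if value > iso:
            index |= 1 << i
    return index

values = [90, 120, 80, 130, 95, 105, 70, 140]
# A constant field reduces to classic marching cubes with iso-value 100...
print(cube_case_index(values, [100] * 8))
# ...while a spatially varying field shifts the surface locally.
print(cube_case_index(values, [85, 125, 75, 120, 100, 100, 60, 150]))
```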
  • Item
    Comprehensive Visualization of Cardiac MRI Data
    (Termeer, Dec 2008) Termeer, Maurice Alain
    Coronary artery disease is one of the leading causes of death in the western world. The continuous improvements in magnetic resonance imaging technology facilitate more accurate diagnoses by providing increasingly detailed information on the viability, functioning, perfusion, and anatomy of a patient's heart. This increasing amount of information creates the need for more efficient and more effective means of processing these data. This thesis presents several novel techniques that facilitate a more comprehensive visualization of a patient's heart to assist in the diagnosis of coronary artery disease using magnetic resonance imaging (MRI). The volumetric bull's eye plot is introduced as an extension of an existing visualization technique used in clinical practice, the bull's eye plot. This novel concept offers a more comprehensive view on the viability of a patient's heart by providing detailed information on the transmurality of scar while not suffering from discontinuities. Anatomical context is often lost due to abstract representations of data, or may be scarce due to the nature of the scanning protocol. Several techniques to restore the relation to anatomy are presented. The primary coronary arteries are segmented in a whole heart scan and mapped onto a volumetric bull's eye plot, adding anatomical context to an abstract representation. Similarly, segmented late enhancement data are rendered along with a three-dimensional segmentation of the patient-specific myocardial and coronary anatomy. Additionally, coronary supply territories are computed from patient-specific data as an improvement over models based on population averages. Information on the perfusion of the myocardium provided by MRI is typically of fairly low resolution. Using high-resolution anatomical data, an approach to visualize simulated myocardial perfusion is presented, taking full advantage of the detailed information on perfusion. Finally, a truly comprehensive visualization of a cardiac MRI exam is explored by combining whole heart, late enhancement, functional, and perfusion scans in a single visualization. The concepts introduced help to build a more comprehensive view of the patient, and the additional information may prove to be beneficial for the diagnostic process.
  • Item
    High Quality Dynamic Reflectance and Surface Reconstruction from Video
    (Ahmed, Naveed, 2009-07-10) Ahmed, Naveed
    The creation of high quality animations of real-world human actors has long been a challenging problem in computer graphics. It involves the modeling of the shape of the virtual actors, creating their motion, and the reproduction of very fine dynamic details. In order to render the actor under arbitrary lighting, it is required that reflectance properties are modeled for each point on the surface. These steps, which are usually performed manually by professional modelers, are time-consuming and cumbersome. In this thesis, we show that algorithmic solutions for some of the problems that arise in the creation of high quality animation of real-world people are possible using multi-view video data. First, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. Thereafter, we propose two enhancements to a method that captures human shape, motion and reflectance properties of a moving human using eight multi-view video streams. Afterwards we extend this work and, in order to add very fine dynamic details to the geometric models, such as wrinkles and folds in the clothing, we make use of the multi-view video recordings and present a statistical method that can passively capture the fine-grain details of time-varying scene geometry. Finally, in order to reconstruct structured shape and animation of the subject from video, we present a dense 3D correspondence finding method that enables spatio-temporally coherent reconstruction of surface animations directly from multi-view video data. These algorithmic solutions can be combined to constitute a complete animation pipeline for acquisition, reconstruction and rendering of high quality virtual actors from multi-view video data. They can also be used individually in a system that requires the solution of a specific algorithmic sub-problem. The results demonstrate that using multi-view video data it is possible to find the model description that enables realistic appearance of animated virtual actors under different lighting conditions and exhibits high quality dynamic details in the geometry.
  • Item
    Expressive Visualization and Rapid Interpretation of Seismic Volumes
    (Patel, 2009-08-01) Patel, Daniel
    One of the most important resources in the world today is energy. Oil and gas provide two thirds of the world's energy consumption, making the world completely dependent on them. Locating and recovering the remaining oil and gas reserves will be of high focus in society until competitive energy sources are found. The search for hydrocarbons is, broadly speaking, the topic of this thesis. Seismic measurements of the subsurface are collected to discover oil and gas trapped in the ground. Identifying oil and gas in the seismic measurements requires visualization and interpretation. Visualization is needed to present the data for further analysis. Interpretation is performed to identify important structures. Visualization is again required for presenting these structures to the user. This thesis investigates how computer assistance in producing high-quality visualizations and in interpretation can result in expressive visualization and rapid interpretation of seismic volumes. Expressive visualizations represent the seismic data in an easily digestible, intuitive and pedagogic form. This enables rapid interpretation, which accelerates the finding of important structures.
  • Item
    Semantic Visualization Mapping for Volume Illustration
    (Rautek, Dec 2008) Rautek, Peter
    Scientific visualization is the discipline of automatically rendering images from scientific data. Adequate visual abstractions are important to show relevant information in the data. Visual abstractions are a trade-off between showing detailed information and preventing visual overload. To use visual abstractions for the depiction of data, a mapping from data attributes to visual abstractions is needed. This mapping is called the visualization mapping. This thesis reviews the history of visual abstractions and visualization mapping in the context of scientific visualization. Later, a novel visual abstraction method called caricaturistic visualization is presented. The concept of exaggeration is the visual abstraction used for caricaturistic visualization. Principles from traditional caricatures are used to accentuate salient details of data while sparsely sketching the context. The visual abstractions described in this thesis are inspired by visual art and mostly by traditional illustration techniques. To make effective use of the recently developed visualization methods that imitate illustration techniques, an expressive visualization mapping approach is required. In this thesis a visualization mapping method is investigated that makes explicit use of semantics to describe mappings from data attributes to visual abstractions. The semantic visualization mapping explicitly uses domain semantics and visual abstraction semantics to specify visualization rules. Illustrative visualization results are shown that are achieved with the semantic visualization mapping. The behavior of the automatically rendered interactive illustrations is specified using interaction-dependent visualization rules. Interactions like the change of the viewpoint or the manipulation of a slicing plane are state of the art in volume visualization. In this thesis a method for more elaborate interaction techniques is presented. The behavior of the illustrations is specified with interaction-dependent rules that are integrated in the semantic visualization mapping approach.
  • Item
    Perceptually-motivated, Interactive Rendering and Editing of Global Illumination
    (Universität des Saarlandes, 22.12.2009) Ritschel, Tobias
    This thesis proposes several new perceptually-motivated techniques to synthesize, edit, and enhance the depiction of three-dimensional virtual scenes. The challenge taken up in this work is to find algorithms that fit the perceptually economic middle ground between artistic depiction and full physical simulation. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool for editing reflections that allows bending physical laws to match artistic goals by exploiting perception. Finally, this work contributes a post-processing operator that depicts high-contrast scenes in the same way as artists do, by simulating it.
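    The post-processing operator is only named at a high level above. A classical operator in this family is unsharp masking, which adds back a high-frequency residual to locally exaggerate contrast; the sketch below shows that generic operator as context, under the assumption that something of this kind is meant, with purely illustrative parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(luminance, sigma=8.0, gain=0.5):
    """Countershading by unsharp masking: add back the high-frequency
    residual L - blur(L). A classical contrast-enhancement operator,
    not necessarily the thesis's exact one."""
    base = gaussian_filter(luminance, sigma)
    return luminance + gain * (luminance - base)

img = np.random.rand(64, 64)   # stand-in for a rendered luminance image
enhanced = unsharp_mask(img)
```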
  • Item
    Robust and Efficient Processing Techniques for Staticand Dynamic Geometric Data
    (Schall, Oliver, 2009-08-28) Schall, Oliver
    Generating high-quality geometric representations of real-world objects is a fundamental problem in computer graphics, motivated by manifold applications. These comprise image synthesis for movie production or computer games, but also industrial applications such as quality assurance in mechanical engineering, the preservation of cultural heritage, and the medical adaptation of prostheses or orthoses. Common demands these applications place on their underlying algorithms are robustness and efficiency. In addition, technological improvements of scanning devices and cameras, which allow for the acquisition of new data types such as dynamic geometric data, create novel requirements that raise new challenges for processing algorithms. This dissertation focuses on these aspects and presents different contributions for the flexible, efficient, and robust processing of static and time-varying geometric data. Two techniques focus on the problem of denoising: a statistical filtering algorithm for point cloud data building on non-parametric density estimation is introduced, as well as a neighborhood filter for static and time-varying range data based on a novel non-local similarity measure. The third contribution unifies a partition-of-unity decomposition with a global surface reconstruction algorithm based on the Fast Fourier Transform, resulting in a novel, robust, and efficient reconstruction technique. Finally, two flexible and versatile tools for designing scalar fields on meshes are presented, which facilitate controllable quadrangular remeshing.
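    As a rough illustration of what filtering by non-parametric density estimation can look like, the sketch below performs one mean-shift-style step that moves every point toward the local mode of a Gaussian kernel density estimate. This is a generic sketch under that assumption, not the specific filter developed in the thesis.

```python
import numpy as np

def density_denoise_step(points, bandwidth=0.1):
    """One mean-shift-style step: move every point toward the mode of a
    Gaussian kernel density estimate of its neighborhood. Generic sketch of
    density-based point-cloud denoising, not the thesis's algorithm."""
    diffs = points[:, None, :] - points[None, :, :]               # (n, n, 3)
    w = np.exp(-np.sum(diffs**2, axis=-1) / (2 * bandwidth**2))   # kernel weights
    w /= w.sum(axis=1, keepdims=True)                             # normalize rows
    return w @ points                                             # weighted means

noisy = np.random.rand(200, 3)
smoothed = density_denoise_step(noisy)
```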
  • Item
    Feature Extraction for Visual Analysis of DW-MRI Data
    (Schultz, Thomas, 2009-06-18) Schultz, Thomas
    Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) is a recent modality to investigate the major neuronal pathways of the human brain. However, the rich DW-MRI datasets cannot be interpreted without proper preprocessing. In order to achieve understandable visualizations, this dissertation reduces the complex data to relevant features.
    The first part is inspired by topological features in flow data. Novel features reconstruct fuzzy fiber bundle geometry from probabilistic tractography results. The topological properties of existing features that extract the skeleton of white matter tracts are clarified, and the core of regions with planar diffusion is visualized.
    The second part builds on methods from computer vision. Relevant boundaries in the data are identified via regularized eigenvalue derivatives, and boundary information is used to segment anisotropy isosurfaces into meaningful regions. A higher-order structure tensor is shown to be an accurate descriptor of local structure in diffusion data.
    The third part is concerned with fiber tracking. Streamline visualizations are improved by adding features from structural MRI in a way that emphasizes the relation between the two types of data, and the accuracy of streamlines in high angular resolution data is increased by modeling the estimation of crossing fiber bundles as a low-rank tensor approximation problem.
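    For context, the anisotropy isosurfaces mentioned above are isosurfaces of a scalar anisotropy measure derived from the diffusion tensor's eigenvalues. The standard such measure in DW-MRI is fractional anisotropy, computed below; that the thesis uses exactly this measure is an assumption.

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy of a 3x3 diffusion tensor: 0 for isotropic
    diffusion, approaching 1 when one eigenvalue dominates."""
    lam = np.linalg.eigvalsh(D)               # the three eigenvalues
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # prolate tensor: one dominant direction
print(fractional_anisotropy(D))          # about 0.8 -> strongly anisotropic
```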
  • Item
    Applications of temporal coherence in real-time rendering
    (Scherzer, 27.11.2009) Scherzer, Daniel
    Real-time rendering imposes the challenging task of creating a new rendering of an input scene at least 60 times a second. Although computer graphics hardware has made staggering advances in terms of speed and freedom of programmability, there still exist a number of algorithms that are too expensive to be calculated within this time budget, such as exact shadows or an exact global illumination solution. One way to circumvent this hard time limit is to capitalize on temporal coherence and formulate algorithms that are incremental in time. The main thesis of this work is that temporal coherence is a characteristic of real-time graphics that can be used to redesign well-known rendering methods so that they become faster while exhibiting better visual fidelity. To this end we present our adaptations of algorithms from the fields of exact hard shadows, physically correct soft shadows, and fast discrete LOD blending, into which we have successfully incorporated temporal coherence. Additionally, we provide a detailed context of previous work, not only in the field of temporal coherence but also in the respective fields of the presented algorithms.
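    The reuse pattern behind most temporal-coherence methods is to reproject the previous frame's result and blend it with the current, more cheaply computed estimate. The sketch below shows this generic exponential-blending pattern in Python for clarity; real implementations run per pixel in shaders, and this is not any specific algorithm from the thesis.

```python
import random

def temporal_blend(current, history, valid, alpha=0.1):
    """Blend this frame's (noisy or expensive) estimate with the value
    carried over from the previous frame; on disocclusion (invalid history)
    fall back to the current estimate alone."""
    return alpha * current + (1.0 - alpha) * history if valid else current

# Toy demo: a noisy per-frame estimate of a constant value stabilizes as
# history accumulates -- the same effect steadies shadows or GI over time.
history, valid = 0.0, False
for frame in range(100):
    noisy_sample = 0.5 + random.uniform(-0.2, 0.2)   # stand-in for shading
    history = temporal_blend(noisy_sample, history, valid)
    valid = True                                     # assume reprojection succeeded
print(round(history, 3))                             # near 0.5
```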
  • Item
    Template based shape processing
    (Stoll, Carsten, 2009-09-30) Stoll, Carsten
    As computers can only represent and process discrete data, information gathered from the real world always has to be sampled. While it is nowadays possible to sample many signals accurately and thus generate high-quality reconstructions (for example of images and audio data), accurately and densely sampling 3D geometry is still a challenge. The signal samples may be corrupted by noise and outliers, and contain large holes due to occlusions. These issues become even more pronounced when also considering the temporal domain. Because of this, developing methods for the accurate reconstruction of shapes from a sparse set of discrete data is an important aspect of the computer graphics processing pipeline. In this thesis we propose novel approaches to including semantic knowledge in reconstruction processes using template-based shape processing. We formulate shape reconstruction as a deformable template fitting process, where we try to fit a given template model to the sampled data; a generic form of the underlying energy is sketched below. This approach allows us to present novel solutions to several fundamental problems in the area of shape reconstruction. We address static problems like constrained texture mapping and semantically meaningful hole-filling in surface reconstruction from 3D scans, temporal problems such as mesh-based performance capture, and finally dynamic problems like the estimation of physically based material parameters of animated templates.
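    Deformable template fitting of this kind is commonly posed as an energy minimization; a generic form, given here as an illustration consistent with the description above rather than the thesis's exact energy, is

        E(T) = \sum_i \| T(x_i) - p_i \|^2 + \lambda \, E_{\mathrm{reg}}(T),

    where T deforms the template, each template point x_i is matched to a data sample p_i, E_reg penalizes implausible deformations, and \lambda trades fidelity against regularity. The semantic knowledge enters through the choice of template and the constraints placed on T.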
  • Item
    On Visualization and Reconstruction from Non-uniform Point Sets
    (Vuçini, 27.11.2009) Vuçini, Erald
    Technological and research advances in both acquisition and simulation devices provide continuously increasing high-resolution volumetric data that by far exceed today's graphics and display capabilities. Non-uniform representations offer a way of balancing this deluge of data by adaptively sampling according to the importance (variance) of the data. Moreover, in many real-life situations the data are known only in a non-uniform representation. Processing non-uniform data is a non-trivial task and considerably more difficult than processing regular data. Transforming from non-uniform to uniform representations is a well-accepted paradigm in the signal processing community, and this thesis advocates that concept. The main motivation for adopting this paradigm is that most techniques and methods related to signal processing, data mining, and data exploration are well-defined and stable for Cartesian data, but are generally non-trivial to apply to non-uniform data. Among other things, this allows us to better exploit the capabilities of modern GPUs.
    In non-uniform representations, sampling rates can vary drastically, even by several orders of magnitude, making the decision on a target resolution a non-trivial trade-off between accuracy and efficiency. In several cases the points are spread non-uniformly with similar density across the volume, while in other cases the points have an enormous variance in distribution. This thesis presents solutions to both cases. For the first case we suggest computing reconstructions of the same volume at different resolutions based on the level of detail we are interested in. The second case is the main motivation for proposing a multi-resolution scheme, where the scale of reconstruction is decided adaptively based on the number of points in each subregion of the whole volume.
    We introduce a novel framework for 3D reconstruction and visualization from non-uniform scalar and vector data, adopting a variational reconstruction approach. In this method, non-uniform point sets are transformed to a uniform representation consisting of B-spline coefficients attached to a grid. With these coefficients we can define a C2-continuous function across the whole volume. Several tests were performed in order to analyze and fine-tune our framework. The tests and results of this thesis offer a view from a new and different perspective on visualization and reconstruction from non-uniform point sets.
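    A generic variational reconstruction of this kind, sketched here under the description above rather than as the thesis's exact functional, seeks grid coefficients c_j minimizing

        \min_c \sum_i \big( f_c(p_i) - v_i \big)^2 + \lambda \int_\Omega \| \nabla^2 f_c(x) \|^2 \, dx,
        \qquad f_c(x) = \sum_j c_j \, \beta^3(x - j),

    where (p_i, v_i) are the non-uniform samples and \beta^3 is the cubic B-spline basis, whose smoothness yields the C2 continuity mentioned above; \lambda balances data fidelity against smoothness of the reconstructed field.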
  • Item
    Anatomical Modeling for Image Analysis in Cardiology
    (Zambal, Mar 2009) Zambal, Sebastian
    The main cause of death in the western world is cardiovascular disease. Modern medical imaging modalities offer great possibilities for the effective diagnosis of this kind of disease. In cardiology, the advent of computed tomography (CT) and magnetic resonance (MR) scanners with high temporal resolution has made imaging of the beating heart possible. Large amounts of data are acquired in everyday clinical practice, and intelligent software is required to optimally analyze the data and support reliable and effective diagnosis.
    This thesis focuses on model-based approaches for the automatic segmentation and extraction of clinically relevant properties from medical images in cardiology. Typical properties of interest are the volume of blood that is ejected per cardiac cycle (stroke volume, SV) or the mass of the heart muscle (myocardial mass).
    Compared to other segmentation and image processing algorithms, the investigated model-based approaches have the advantage that they exploit prior knowledge, which increases robustness. Throughout this thesis, models are discussed which consist of two important parts: shape and texture. Shape is modeled in order to restrict the geometric properties of the investigated anatomical structures. Texture, on the other hand, is used to describe gray values and plays an important role in matching the model to new, unseen images.
    Automatic initialization of model-based segmentation is important for many applications. For cardiac MR images, this thesis proposes a sequence of image processing steps which calculate an initial placement of a model.
    A special two-component model for the segmentation of functional cardiac MR studies is presented. This model combines individual 2D Active Appearance Models with a 3D statistical shape model.
    An approach to effective texture modeling is introduced: an information-theoretic objective function is proposed for optimized probabilistic texture representation.
    Finally, a model-based coronary artery centerline extraction algorithm is presented. The results of this method were validated at a workshop at the international MICCAI conference, where, in a direct comparison, the method outperformed four other automatic centerline extraction algorithms.
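    The 3D statistical shape model mentioned above follows the standard point-distribution-model formulation, given here for context:

        s = \bar{s} + \Phi b = \bar{s} + \sum_{k=1}^{m} b_k \phi_k,

    where \bar{s} is the mean shape of the training set and the columns \phi_k of \Phi are the leading PCA eigenvectors of shape variation. Constraining the coefficients b_k to plausible ranges (commonly |b_k| \le 3\sqrt{\lambda_k}, with \lambda_k the corresponding eigenvalues) restricts the segmentation to anatomically plausible shapes.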