Graphics Dissertation Online
For information about the PhD Award, please visit the Eurographics Annual Award for Best PhD Thesis.
Eurographics PhD Award Winners (under construction)
Browsing Graphics Dissertation Online by Issue Date
Now showing 1 - 20 of 423
Item Accelerating Geometric Queries for Computer Graphics: Algorithms, Techniques, and Applications (0000-08-16) Evangelou, Iordanis
In the ever-evolving context of Computer Graphics, the demand for realistic and real-time virtual environments and interaction with digitised or born-digital content has grown exponentially. Whether in gaming, production rendering, computer-aided design, reverse engineering, geometry processing, or understanding and simulation tasks, the ability to rapidly and accurately perform geometric queries of any type is crucial. The actual form of a geometric query varies depending on the task at hand, the application domain, the input representation, and the adopted methodology. These queries may involve intersection tests, as in ray tracing; spatial queries, such as recovering nearest sample neighbours; geometry registration in order to classify polygonal primitive inputs; or even virtual scene understanding in order to suggest and embed configurations, as in light optimisation and placement. As the applications of these algorithms and, consequently, their complexity continuously grow, traditional geometric queries fall short when naïvely adopted and integrated in practical scenarios, facing limitations in computational efficiency and query bandwidth. This is particularly pronounced in scenarios where vast amounts of geometric data must be processed at interactive or even real-time rates. More often than not, one has to inspect and understand the internal mechanics and theory of the algorithms invoking these geometric queries. This is particularly useful in order to devise procedures appropriately tailored to the underlying task and hence maximise their efficiency, both in terms of performance and output quality. As a result, a large body of research explores innovative approaches to geometric query acceleration that address these challenges. The primary focus of this research was to develop innovative methods for accelerating geometric queries within the domain of Computer Graphics. This entails a comprehensive exploration of algorithmic optimisations, including the development of advanced data structures and neural network architectures tailored to efficiently handle geometric collections. This research addressed not only the computational complexity of individual queries, but also the adaptability of the proposed solutions to diverse applications and scenarios, primarily within the realm of Computer Graphics but also in intersecting domains. The outcome of this research holds the potential to influence the fields that adopt these geometric query methodologies by addressing the associated computational challenges and unlocking novel directions for real-time rendering, interactive simulation, and immersive virtual experiences. More specifically, the contributions of this thesis are divided into two broad directions for accelerating geometric queries: a) global illumination-related, hardware-accelerated nearest-neighbour queries and b) application of deep learning to the definition of novel data structures and geometric query methods. In the first part, we consider the task of real-time global illumination using photon density estimators.
In particular, we investigate scenarios where complex illumination effects, such as caustics, which are mainly handled by progressive photon mapping algorithms, require vast amounts of rays to be traced from both the eye sensor and the light sources. Photons emanating from lights are cached in the surface geometry or volumetric media and must be gathered at query locations on the paths traced from the camera sensor. To achieve real-time frame rates, gathering, an expensive operation, needs to be handled efficiently. This is accomplished by adapting screen-space ray tracing and splatting to the hardware-accelerated rasterisation pipeline. Since the gathering phase is an inherent subcategory of nearest-neighbour search, we also propose how to efficiently generalise this concept to any form of task by exploiting existing low-level hardware-accelerated ray tracing frameworks, effectively boosting the inference phase by orders of magnitude compared to the traditional strategies involved. In the second part, we shift our focus to a more generic class of geometric queries. The first work involves accurate and fast shape classification using neural network architectures. We demonstrate that a hybrid architecture, which processes orientation and a voxel-based representation of the input, is capable of processing hard-to-distinguish solid geometry in the context of building information models. Second, we consider the form of geometric queries in the context of scene understanding. More precisely, optimising the placement and light intensities of luminaires in urban spaces can be a computationally intricate task, especially for large inputs and conflicting constraints. Methodologies employed in the literature usually make assumptions about the input representation to mitigate the intractable nature of this task. In this thesis, we approach this problem with a holistic solution that can produce feasible and diverse proposals in real time by adopting a neural-based generative modelling methodology. Finally, we propose a novel and general approach to solving the recursive cost evaluators used in the construction of geometric query acceleration data structures. This work establishes a new research direction for the construction of data structures guided by recursive cost functions using neural-based architectures. Our goal is to overcome the exhaustive but intractable evaluation of the cost function, in order to generate a high-quality data structure for spatial queries.
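A representative example of such a recursive cost evaluator, quoted here as a standard illustration rather than as the exact function optimised in the thesis, is the surface area heuristic used to build bounding volume hierarchies. The cost of a node N split into children L and R is

    \[
    C(N) = C_{\mathrm{trav}} + \frac{A(L)}{A(N)}\,C(L) + \frac{A(R)}{A(N)}\,C(R),
    \qquad
    C(\text{leaf}) = C_{\mathrm{isect}} \cdot |\text{leaf}|,
    \]

where A(·) is the surface area of a node's bounding box. Evaluating this recursion exactly over all candidate splits is what makes exhaustive construction intractable.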
Item Processing Semantically Enriched Content for Interactive 3D Visualizations (Settgast, 13-05-28) Settgast, Volker
Interactive 3D graphics has become an essential tool in many fields of application: In manufacturing companies, e.g., new products are planned and tested digitally. The effect of new designs and the testing of ergonomic aspects can be done with purely virtual models. Furthermore, the training of procedures on complex machines is shifted to the virtual world. In that way support costs for the usage of the real machine are reduced, and effective forms of training evaluation are possible. Virtual reality helps to preserve and study cultural heritage: Artifacts can be digitized and preserved in a digital library, making them accessible to a larger group of people. Various forms of analysis can be performed on the digital objects which are hardly possible to perform on the real objects or would destroy them. Using virtual reality environments like large projection walls helps to show virtual scenes in a realistic way. The level of immersion can be further increased by using stereoscopic displays and by adjusting the images to the head position of the observer. One challenge with virtual reality is the inconsistency in data. Moving 3D content from a useful state, e.g., from a repository of artifacts or from within a planning workflow, to an interactive presentation is often realized with degenerative steps of preparation. The productiveness of Powerwalls and CAVEs™ is called into question, because the creation of interactive virtual worlds is a one-way road in many cases: Data has to be reduced in order to be manageable by the interactive renderer and to be displayed in real time on various target platforms. The impact of virtual reality can be improved by bringing results back from the virtual environment to a useful state, or even better: never leaving that state. With the help of semantic data throughout the whole process, it is possible to speed up the preparation steps and to keep important information within the virtual 3D scene. The integrated support for semantic data enhances the virtual experience and opens new ways of presentation. At the same time the goal becomes feasible to bring data back from the presentation, for example in a CAVE™, to the working process. Especially in the field of cultural heritage it is essential to store semantic data with the 3D artifacts in a sustainable way. Within this thesis new ways of handling semantic data in interactive 3D visualizations are presented. The whole process of 3D data creation is demonstrated with regard to semantic sustainability. The basic terms, definitions and available standards for semantic markup are described. Additionally, a method is given to generate semantics of higher order automatically. An important aspect is the linking of semantic information with 3D data. The thesis gives two suggestions on how to store and publish the valuable combination of 3D content and semantic markup in a sustainable way. Different environments for virtual reality are compared and their special needs are pointed out. Primarily the DAVE in Graz is presented in detail, and novel ways of user interaction in such immersive environments are proposed. Finally, applications in the fields of cultural heritage, security and mobility are presented. The presented symbiosis of 3D content and semantic information is an important contribution to improving the usage of virtual environments in various fields of application.
Item Real-time Illustrative Visualization of Cardiovascular Hemodynamics (Van Pelt, 13-06-2012) Van Pelt, Roy F. P.
Modern magnetic resonance imaging techniques enable acquisition of time-resolved volumetric blood-flow velocity data. With these data, physicians aim for newfound insight into the intricate blood-flow dynamics. This conceivably leads to improved diagnosis and prognosis of cardiovascular diseases, as well as a better assessment of treatments and risks. We facilitate the time-consuming and challenging process of visual analysis of these unsteady, multi-dimensional and multi-valued data by comprehensive exploratory visualization techniques, tailored to communicate blood flow in the heart and the thoracic arteries. We introduce abstraction techniques to reduce the abundance of information contained in the data. Interactive exploration is enabled by probing tools, selecting regions of interest that serve as a basis for our real-time illustrative visualizations. Based on evaluation studies with the involved physicians, we believe that real-time visual exploration of blood-flow data facilitates qualitative analysis.

Item Process-Based Design of Multimedia Annotation Systems (Hofmann, 06.12.2010) Hofmann, Cristian Erick
Annotation of digital multimedia comprises a range of different application scenarios, supported media and annotation formats, and involved techniques. Accordingly, recent annotation environments provide numerous functions and editing options. This results in complexly designed user interfaces, so that human operators are disoriented with respect to task procedures and the selection of suitable tools. In this thesis we contribute to the operability of multimedia annotation systems in several novel ways. We introduce concepts to support annotation processes, to which principles of Workflow Management are transferred. Particularly focusing on the behavior of graphical user interface components, we achieve a significant decrease of user disorientation and processing times. In three initial studies, we investigate multimedia annotation from two different perspectives. A Feature-oriented Analysis of Annotation Systems describes applied techniques and forms of processed data. Moreover, an Empirical Study and a Literature Survey elucidate different practices of annotation, considering case examples and proposed workflow models. Based on the results of the preliminary studies, we establish a Generic Process Model of Multimedia Annotation, summarizing identified sub-processes and tasks, their sequential procedures, applied services, as well as involved data formats. By a transfer into a Formal Process Specification we define information entities and their interrelations, constituting a basis for workflow modeling and declaring types of data which need to be managed and processed by the technical system. We propose a Reference Architecture Model, which elucidates the structure and behavior of a process-based annotation system, also specifying interactions and interfaces between different integrated components. As the central contribution of this thesis, we introduce a concept for Process-driven User Assistance. This implies visual and interactive access to a given workflow, representation of the workflow progress, and status-dependent invocation of tools. We present results from a User Study conducted by means of the so-called SemAnnot framework. We implemented this novel framework based on our considerations mentioned above. In this study we show that the application of our proposed concept for process-driven user assistance leads to strongly significant improvements in the operability of multimedia annotation systems. These improvements are associated with the partial aspects efficiency, learnability, usability, process overview, and user satisfaction.

Item Feature Centric Volume Visualization (Malik, 11.12.2009) Malik, Muhammad Muddassir
This thesis presents techniques and algorithms for the effective exploration of volumetric datasets. The visualization techniques are designed to focus on user-specified features of interest. The proposed techniques are grouped into four chapters, namely feature peeling, computation and visualization of fabrication artifacts, locally adaptive marching cubes, and comparative visualization for parameter studies of dataset series.
The presented methods enable the user to efficiently explore the volumetric dataset for features of interest. Feature peeling is a novel rendering algorithm that analyzes ray profiles along lines of sight. The profiles are subdivided according to encountered peaks and valleys at so-called transition points. The sensitivity of these transition points is calibrated via two thresholds. The slope threshold is based on the magnitude of a peak following a valley, while the peeling threshold measures the depth of the transition point relative to the neighboring rays. This technique separates the dataset into a number of feature layers. Fabrication artifacts are of prime importance for quality control engineers for first-part inspection of industrial components. Techniques are presented in this thesis to measure fabrication artifacts through direct comparison of a reference CAD model with the corresponding industrial 3D X-ray computed tomography volume. Information from the CAD model is used to locate corresponding points in the volume data. Then various comparison metrics are computed to measure differences (fabrication artifacts) between the CAD model and the volumetric dataset. The comparison metrics are classified as either geometry-driven or visual-driven comparison techniques. The locally adaptive marching cubes algorithm is a modification of the marching cubes algorithm where, instead of a global iso-value, each grid point has its own iso-value. This defines an iso-value field, which modifies the case identification process in the algorithm. An iso-value field enables the algorithm to correct biases within the dataset like low-frequency noise, contrast drifts, local density variations, and other artifacts introduced by the measurement process. It can also be used for blending between different iso-surfaces (e.g., skin and bone in a medical dataset). Comparative visualization techniques are proposed to carry out parameter studies for the special application area of dimensional measurement using industrial 3D X-ray computed tomography. A dataset series is generated by scanning a specimen multiple times while varying parameters of the scanning device. A high-resolution series is explored using a planar-reformatting-based visualization system. A multi-image view and an edge explorer are proposed for comparing and visualizing gray values and edges of several datasets simultaneously. For fast data retrieval and convenient usability the datasets are bricked and efficient data structures are used.
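To make the transition-point idea concrete, here is a minimal sketch of detecting valley-to-peak transitions along a single ray profile. The function and parameter names are illustrative, not the thesis's actual code, and the peeling threshold, which compares against neighboring rays, is omitted.

    def transition_points(profile, slope_threshold):
        # A ray profile is a list of scalar samples along one line of sight.
        # Report valleys that are followed by a sufficiently high peak,
        # mirroring the slope-threshold test described above.
        points = []
        for i in range(1, len(profile) - 1):
            if not (profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]):
                continue  # not a valley
            j = i + 1
            while j + 1 < len(profile) and profile[j + 1] >= profile[j]:
                j += 1  # climb to the following peak
            if profile[j] - profile[i] > slope_threshold:
                points.append(i)  # a new feature layer starts here
        return points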
Item Selected Quality Metrics for Digital Passport Photographs (Gonzalez Castillo, 12.12.2007) Gonzalez Castillo, Oriana Yuridia
Facial images play a significant role as biometric identifiers. The accurate identification of individuals is nowadays becoming more and more important and can have a big impact on security. Good quality of facial images in passport photographs is essential for accurate identification of individuals. The quality acceptance procedure presently used is based on human visual perception and is thus subjective and not standardized. Existing algorithms for measuring image quality are applied to all types of images and are not focused on the quality determination of passport photographs. However, few documents exist that define conformance requirements for the determination of digital passport photograph quality. A major document is named "Biometrics Deployment of Machine Readable Travel Documents", published by the International Civil Aviation Organization (ICAO). This thesis deals with the development of metrics for the automated determination of the quality and grade of acceptance of digital passport photographs without having any reference image available. Based on the above-mentioned document of the ICAO, quality conformance sentences and related attributes are abstracted with self-developed methods. About fifty passport photographs have been taken under strictly controlled conditions to fulfill all requirements given by the above-mentioned document. Different kinds of algorithms were implemented to determine values for image attributes and to detect the face features. This ground-truth database was the source to "translate" natural language into numeric values describing how "good quality" is represented by numbers. No priority for the evaluation of attributes was given in the ICAO document. For that reason an international online and on-site survey was developed to explore the opinion of expert users whose work is related to passport photographs. They were asked to evaluate the relevance of different types of attributes related to a passport photograph. Based on that survey, weights for the different types of attributes have been calculated. These weights express the different importance of the attributes for the evaluation process. Three different metrics, expressed by the Photograph-/Image-/Biometric Attributes Quality Indexes (PAQI, IAQI, BAQI), have been developed to obtain reference values for the quality determination of a passport photograph. Experiments are described to show that the quality of a selected digital passport photograph can be measured, and different attributes which have an impact on the quality and on the recognition of face features can be identified. Critical issues are discussed and the thesis closes with recommendations for further research approaches.
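The survey-derived weights enter such indexes as a weighted combination of per-attribute scores. Generically (the precise per-index formulas are defined in the thesis), an index of this kind has the form

    \[
    Q = \frac{\sum_i w_i\, q_i}{\sum_i w_i},
    \]

where q_i is the measured score of attribute i and w_i its survey-derived weight.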
Item Filament-Based Smoke (Weißmann, 15. 9. 2010) Weißmann, Steffen
This cumulative dissertation presents a complete model for simulating smoke using polygonal vortex filaments. Based on a Hamiltonian system for the dynamics of smooth vortex filaments, we develop an efficient and robust algorithm that allows simulations in real time. The discrete smoke ring flow allows the use of coarse polygonal vortex filaments while preserving the qualitative behavior of the smooth system. The method handles rigidly moving obstacles as boundary conditions and simulates vortex shedding. Obstacles as well as shed vorticity are also represented as polygonal filaments. Variational vortex reconnection prevents the exponential increase of filament length over time, without significant modification of the fluid velocity field. This allows for simulations over extended periods of time. The algorithm reproduces various real experiments (colliding vortex rings, wakes) that are challenging for classical methods.
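The velocity field induced by a vortex filament, which such methods evaluate for the polygonal segments, is given (up to sign convention) by the Biot-Savart law:

    \[
    u(x) = \frac{\Gamma}{4\pi} \oint \frac{r'(s) \times \bigl(x - r(s)\bigr)}{\lVert x - r(s) \rVert^{3}} \, ds,
    \]

where r(s) traces the filament and Γ is its circulation; for a polygonal filament the integral reduces to a closed-form sum over the straight edges.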
Item Schnelle Kurven- und Flächendarstellung auf grafischen Sichtgeräten (1974-09-05) Straßer, Wolfgang
For application to the interactive design of curves and surfaces on graphical display devices, the first part of this thesis presents known mathematical methods in a compact, uniform matrix notation. As a new method, B-spline approximation is examined with respect to its properties and its possibilities for computer-aided design, and is illustrated with examples. B-spline approximation proves to be not only the most universal and most easily handled mathematical method, but also the one best suited for hardware generation of curves and surfaces. A new method for shading surfaces in real time is presented and documented with images. In the second part, taking new hardware components into account, digital building blocks for a display processor are presented: a vector generator, a circle generator, a matrix multiplier, a divider, and a curve and surface generator.
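The matrix notation referred to above survives essentially unchanged in today's textbooks. As a standard illustration (not quoted from the thesis itself), a uniform cubic B-spline segment over control points P_i, ..., P_{i+3} reads

    \[
    p(t) = \begin{bmatrix} t^3 & t^2 & t & 1 \end{bmatrix}
    \frac{1}{6}
    \begin{bmatrix}
    -1 & 3 & -3 & 1 \\
    3 & -6 & 3 & 0 \\
    -3 & 0 & 3 & 0 \\
    1 & 4 & 1 & 0
    \end{bmatrix}
    \begin{bmatrix} P_i \\ P_{i+1} \\ P_{i+2} \\ P_{i+3} \end{bmatrix},
    \qquad t \in [0, 1],
    \]

a form well suited to incremental hardware evaluation, which is precisely the property the thesis exploits.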
Item Tone Mapping Techniques and Color Image Difference in Global Illumination (Matkovic, 1997) Matkovic, Kresimir
Tone mapping is the final step of every rendering process. Due to display device nonlinearities, reduced color gamuts and moderate dynamic ranges, it is necessary to apply some mapping technique to the computed radiances. We describe mapping methods that are considered to be state of the art today, and some newly developed techniques. The main contributions of this thesis in tone mapping are interactive calibration of contrast and aperture, minimum information loss methods, and incident light metering. The interactive calibration technique makes it possible to display a desired scene lighting atmosphere even if the radiance values are rendered in fictitious units. Minimum information loss techniques are based, in a way, on the photographer's approach. The mapping function is applied only to a certain radiance interval, which is chosen automatically. The original contrast of all pixels inside the interval is preserved. Furthermore, the bounded-error version of the minimum loss method is an extension of Schlick's method. The incident light metering method was inspired by the photographer's approach, too. This method makes it possible to reproduce original colors faithfully. Even if the average reflectance of a scene is very low, or very high, this method will reproduce the original colors, which is not the case with other methods. The idea is to measure the incident light using diffusors placed in the scene, to compute a scale factor based on the incident light, and to apply this scale factor to the computed radiances. Besides these, other tone mapping techniques are described in this work. We describe Tumblin and Rushmeier's mapping, Ward's contrast-based scale factor, the widely used mean value mapping, an exponential mapping introduced by Ferschin et al., Schlick's mapping, a visibility matching tone operator introduced by Larson et al., and a model of visual adaptation proposed by Ferwerda et al. Unfortunately there is no ultimate solution to the tone mapping problem. Every method has its strengths and weaknesses, and the user should choose a method according to his or her needs. Finally, this thesis ends with a color image difference algorithm. A good image metric is often needed in computer graphics. The method described here is a perception-based metric that operates in the original image space (there is no need for a Fourier or wavelet transform), which makes the whole method fast and intuitive. This is the only method that explicitly takes the distance of the observer from the image into account.
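Of the surveyed operators, Ward's contrast-based scale factor has a compact closed form worth quoting, as commonly stated in the literature: display luminance is L_d = m · L_w with

    \[
    m = \left( \frac{1.219 + (L_{d,\max}/2)^{0.4}}{1.219 + L_{wa}^{0.4}} \right)^{2.5},
    \]

where L_wa is the world adaptation luminance and L_{d,max} the maximum display luminance.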
Item The Remote Rendering Pipeline - Managing Geometry and Bandwidth in Distributed Virtual Environments (Schmalstieg, 1997) Schmalstieg, Dieter
The contribution of this thesis lies where interactive 3D computer graphics and distributed systems meet. Virtual environments are concerned with the convincing simulation of a virtual world. One of the most promising aspects of this approach lies in its potential as a way of bringing people together, as a virtual meeting place. To overcome current restrictions in network performance and bandwidth, techniques that have already been used for improving rendering performance in virtual reality applications can be adopted and enhanced. In this context, we develop the concept of the Remote Rendering Pipeline, which extends the traditional rendering pipeline for interactive graphics to include the network transmission of geometry data. By optimizing the steps of the Remote Rendering Pipeline, and combining these improvements, a system emerges that is better prepared to deal with complex and interesting virtual worlds. After a discussion of the state of the art in the fields of interactive 3D graphics and distributed virtual environments, the Remote Rendering Pipeline is introduced, a conceptual model of rendering in distributed systems. Its elements are discussed in the following chapters: the demand-driven geometry transmission protocol, a strategy for managing partially replicated geometry databases for virtual environments; an octree-based level-of-detail generator; smooth levels of detail, a novel data structure for incremental encoding and transmission of polygonal objects; and a modeling and rendering toolkit for directed cyclic graphs, allowing a compact representation of a large class of natural phenomena.

Item Efficient Object-Based Hierarchical Radiosity Methods (Schäfer, Stephan, 2000-05-30) Schäfer, Stephan
The efficient generation of photorealistic images is one of the main subjects in the field of computer graphics. In contrast to simple image generation, which is directly supported by standard 3D graphics hardware, photorealistic image synthesis strongly adheres to the physics describing the flow of light in a given environment. By simulating the energy flow in a 3D scene, global effects like shadows and inter-reflections can be rendered accurately. The hierarchical radiosity method is one way of computing the global illumination in a scene. Due to its limitation to purely diffuse surfaces, solutions computed by this method are view-independent and can be examined in real-time walkthroughs. Additionally, the physically based algorithm makes it well suited for lighting design and architectural visualization. The focus of this thesis is the application of object-oriented methods to the radiosity problem. By consistently keeping and using object information throughout all stages of the algorithms, several contributions to the field of radiosity rendering could be made. By introducing a new meshing scheme, it is shown how curved objects can be treated efficiently by hierarchical radiosity algorithms. Using the same paradigm, the radiosity computation can be distributed in a network of computers. A parallel implementation is presented that minimizes communication costs while obtaining an efficient speedup. Radiosity solutions for very large scenes became possible through the use of clustering algorithms. Groups of objects are combined into clusters to simulate the energy exchange on a higher abstraction level. It is shown how the clustering technique can be improved without loss in image quality by applying the same data structure to both the visibility computations and the efficient radiosity simulation.
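The simulation underlying these methods is the discrete radiosity equation, which hierarchical approaches solve by adaptively refining the links between patches (standard form, not specific to this thesis):

    \[
    B_i = E_i + \rho_i \sum_j F_{ij} B_j,
    \]

where B_i is the radiosity of patch i, E_i its emission, ρ_i its diffuse reflectance, and F_{ij} the form factor from patch i to patch j.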
Item Real-Time Volume Visualization on Low-End Hardware (Mroz, 2001-05) Mroz, Lukas
Volume visualization is an important tool for investigating and presenting data within numerous fields of application. Medical imaging modalities and numerical simulation applications, for example, produce huge amounts of data, which can be effectively viewed and investigated in 3D. The ability to interactively work with volumetric data on standard desktop hardware is of utmost importance for telemedicine, collaborative visualization, and especially for Internet-based visualization applications. The key to interactive but software-based volume rendering is an efficient approach to skip parts of the volume which do not contribute to the visualization results under the visualization mapping in use and the current parameter settings. In this work, an efficient way of skipping non-contributing parts of the data is presented. Skipping is done at negligible effort by extracting just the potentially contributing voxels from the volume during a preprocessing step, and by storing them in a derived, enumeration-like data structure. Within this structure, voxels are ordered in a way which is optimized for the chosen compositing technique, and which allows further voxels to be skipped efficiently as they become irrelevant for the result due to changes of the visualization parameters. Together with fast shear/warp-based rendering and flexible shading based on look-up tables, this approach can be used to provide interactive rendering of segmented volumes, featuring object-aware clipping, transfer functions, shading models, and compositing modes defined on a per-object basis, even on standard desktop hardware. In combination with a space-efficient encoding of the enumerated voxel data, the approach is well suited for visualization over low-bandwidth networks, like the Internet.
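The enumeration idea can be sketched in a few lines. This is an illustrative reduction (the thesis additionally orders voxels for the chosen compositing mode, which this sketch ignores): voxels are extracted once and sorted by value, so a change of transfer function selects a contiguous, binary-searchable range instead of forcing a rescan of the volume.

    import numpy as np

    def enumerate_voxels(volume):
        # One-time preprocessing: keep every voxel that could ever
        # contribute, as (value, position) records sorted by value.
        z, y, x = np.nonzero(volume)
        values = volume[z, y, x]
        order = np.argsort(values)
        return values[order], np.stack([x, y, z], axis=1)[order]

    def contributing(values, positions, lo, hi):
        # Voxels relevant under a transfer function that is zero
        # outside [lo, hi]: a slice found by binary search, so nothing
        # is re-scanned when the user edits the transfer function.
        i, j = np.searchsorted(values, [lo, hi])
        return positions[i:j]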
Item Shape Spaces from Morphing (Alexa, 2002) Marc Alexa
This thesis presents methods for representing the shape or form of objects. The basic idea is to describe the shape of an object as a blend of other, given shapes. To this end, the mathematical concept of linear spaces is used: several objects form the basis of a space, and their combination generates the elements of this space. This kind of description has two advantages over the widespread absolute representation: it is compact when the number of basis shapes is small compared to the geometric complexity of the objects, and it is descriptive when the basis shapes carry semantics, since the contributions of the basis shapes then describe the object. Polygonal meshes are used here to represent the basis shapes. The thesis therefore deals with the combination of given polygon meshes and with several applications that suggest themselves for this kind of graphical model description. The transformation of a given object into another is called morphing in computer graphics. In the terminology used here, the result of this transformation can be understood as a one-dimensional space. Further transformations with additional basis shapes yield higher-dimensional spaces. At present, morphing techniques for polygonal meshes still need improvement due to topological and geometric problems, which is why the first part of this thesis is concerned with such techniques. These morphing techniques are then extended to allow the combination of more than two meshes. The usefulness of this description of shape is demonstrated in two scenarios: the visualization of multi-parameter information data, where the parameters are mapped onto glyphs, and the efficient storage and transmission of geometric animations.
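In this view, a shape is simply a point in the space spanned by the basis shapes: with corresponding vertices across all basis meshes B_1, ..., B_n, a blended shape is

    \[
    S(w) = \sum_{i=1}^{n} w_i B_i, \qquad \sum_{i=1}^{n} w_i = 1,
    \]

where the weights w_i are the descriptive coordinates of the object in the shape space.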
Item A Head Model with Anatomical Structure for Facial Modeling and Animation (2003) Kähler, Kolja
In this dissertation, I describe a virtual head model with anatomical structure. The model is animated in a physics-based manner by use of muscle contractions that in turn cause skin deformations; the simulation is efficient enough to achieve real-time frame rates on current PC hardware. Construction of head models is eased in my approach by deriving new models from a prototype, employing a deformation method that reshapes the complete virtual head structure. Without additional modeling tasks, this results in an immediately animatable model. The general deformation method allows for several applications, such as adaptation to individual scan data for the creation of animated head models of real persons. The basis for the deformation method is a set of facial feature points, which leads to other interesting uses when this set is chosen according to an anthropometric standard set of facial landmarks: I present algorithms for simulation of human head growth and reconstruction of a face from a skull.

Item Graphical Abstraction and Progressive Transmission in Internet-based 3D Geoinformation Systems (Volker Coors, 2003) Volker Coors
The aim of this thesis was to eliminate essential deficits in the use of previous 3D GIS in an openly distributed GIS infrastructure. The increasing availability of three-dimensional city models poses a special problem, because previous GIS were not able to use such a database sensibly. An additional difficulty occurs in computer graphical representations, where problems exist when generating a meaningful, interactive, three-dimensional presentation of these large models in a heterogeneous network environment. The use of three-dimensional geodata in new applications like Location Based Services is only feasible when these basic technical problems are solved. This chapter recapitulates the results of the thesis and their significance for GIS research and summarizes the applications. The perspective on future work in this research field is developed subsequently.

Item Asymmetric and Adaptive Conference Systems for Enabling Computer-Supported Mobile Activities (Alves dos Santos, Luiz Manoel, 2003) Alves dos Santos, Luiz Manoel
This work was conducted at the Darmstadt University of Technology, essentially between 1998 and 2002. Before and during this period, I was working at the INI-GraphicsNet, Darmstadt, first at the Zentrum für Graphische Datenverarbeitung e.V. and later at the Fraunhofer-Institut für Graphische Datenverarbeitung (IGD), as a researcher. This thesis addresses the investigations and results achieved during my work at these organizations. My initial development projects in the area of mobile computing were very challenging due to the immense constraints posed by the then-incipient hardware and wireless network infrastructures, and similarly overwhelming due to the desire to employ those fascinating appliances by all means possible. The endeavour to keep the respective application systems in a course of continuous improvement (i.e., with richer media presentation and interactivity), and at the same astonishing pace as the technological evolution, was both demanding and rewarding; however, it turned out to be a questionable procedure. After several prototype demonstrations and observations, there came a turning point, following the acknowledgement that, for application cases involving user mobility, a supporting tool is appraised significantly on the basis of its adequacy for the usage conditions and its flexibility to adapt to changing requirements and to any platform specification or resource availability. The circumstances of mobile use (e.g., outdoors, on the move, in confined places) require new approaches in application system development and create a high demand for specialized, task-oriented system features. Any service being offered has to be able to account for, adjust itself to, and be responsive to the increasing and unpredictable diversity of prospective users and their usage environments. The achievement of this attribute is even more challenging when the service should be a basis for a digitally mediated human-to-human communication process involving all kinds of diversity between the individual partners and technical arrangements. In this thesis work, proposals and innovative solutions to these challenges have been investigated and implemented, and are presented in this report. Some contributions of this work are: an adaptive conference system for heterogeneous environments; tools to assess, distribute, and respond to User Profiles at both the individual and collective level; adaptive, flexible individual interaction modes and media that are nevertheless consistent for collaborative work; and mechanisms for remote awareness (of constraints) for structuring interaction. However, above any technological advances, the major research challenge was concerned with the human factor and the achievement of an effective integration of a supporting tool into people's daily activities and lives.
Item Efficient, Image-Based Appearance Acquisition of Real-World Objects (Lensch, Hendrik Peter Asmus, 2003-12-15) Lensch, Hendrik Peter Asmus
Two ingredients are necessary to synthesize realistic images: an accurate rendering algorithm and, equally important, high-quality models in terms of geometry and reflection properties. In this dissertation we focus on capturing the appearance of real-world objects. The acquired model must represent both the geometry and the reflection properties of the object in order to create new views of the object under novel illumination. Starting from scanned 3D geometry, we measure the reflection properties (BRDF) of the object from images taken under known viewing and lighting conditions. The BRDF measurement requires only a small number of input images and is made even more efficient by a view planning algorithm. In particular, we propose algorithms for efficient image-to-geometry registration, and an image-based measurement technique to reconstruct spatially varying materials from a sparse set of images using a point light source. Moreover, we present a view planning algorithm that calculates camera and light source positions for optimal quality and efficiency of the measurement process. Relightable models of real-world objects are requested in various fields, such as movie production, e-commerce, digital libraries, and virtual heritage.

Item Efficient Acquisition, Representation, and Rendering of Light Fields (Schirmacher, Hartmut, 2003-12-16) Schirmacher, Hartmut
In this thesis we discuss the representation of three-dimensional scenes using image data (image-based rendering), and more precisely the so-called light field approach. We start with an up-to-date survey of previous work in this young field of research. Then we propose a light field representation based on image data and additional per-pixel depth values. This enables us to reconstruct arbitrary views of the scene in an efficient way and with high quality. Furthermore, we can use the same representation to determine optimal reference views during the acquisition of a light field. We further present the so-called free-form parameterization, which allows for relatively free placement of reference views. Finally, we demonstrate a prototype of the Lumi-Shelf system, which acquires, transmits, and renders the light field of a dynamic scene at multiple frames per second.
Item High-Quality Visualization and Filtering (Hadwiger, 2004) Hadwiger, Markus
Most rendering methods in visualization and computer graphics focus either on image quality, producing correct images at non-interactive rendering times, or sacrifice quality in order to attain interactive or even real-time performance. However, the current evolution of graphics hardware increasingly allows the quality of off-line rendering approaches to be combined with highly interactive performance. In order to do so, new and customized algorithms have to be developed that take the specific structure of graphics hardware architectures into account. The central theme of this thesis is combining high rendering quality with real-time performance in the visualization of sampled volume data given on regular three-dimensional grids. More generally, a large part of this work is concerned with high-quality filtering of texture maps, regardless of their dimension. Harnessing the computational power of consumer graphics hardware available in off-the-shelf personal computers, algorithms are introduced that attain a level of quality previously only possible in off-line rendering. A fundamental operation in visualization and computer graphics is the reconstruction of a continuous function from a sampled representation via filtering. This thesis presents a method for using completely arbitrary convolution filters for high-quality reconstruction exploiting graphics hardware, focusing on real-time magnification of textures during rendering. High-quality filtering in combination with MIP-mapping is also illustrated, in order to deal with texture minification. Since texturing is a very fundamental operation in computer graphics and visualization, the resulting quality improvements have a wide variety of applications, including static texture-mapped objects, animated textures, and texture-based volume rendering. The combination of high-quality filtering with all major approaches to hardware-accelerated volume rendering is demonstrated. In the context of volume rendering, this thesis introduces a framework for high-quality rendering of segmented volume data, i.e., data with object membership information such as segmented medical data sets. High-quality shading with per-object optical properties such as rendering modes and transfer functions is made possible, while maintaining real-time performance. The presented method is able to filter the boundaries between different objects on the fly, which is non-trivial when more than two objects are present, but important for high-quality rendering. Finally, several approaches to high-quality non-photorealistic volume rendering are introduced, a concept that is especially powerful in combination with segmented volume data in order to focus a viewer's attention and separate focus from context regions. High-quality renderings of isosurfaces are obtained from volumetric representations, utilizing the concept of deferred shading and deferred computation of high-quality differential implicit surface properties. These properties include the gradient, the Hessian matrix, and principal curvature magnitudes as well as directions. They allow high-quality shading and a variety of non-photorealistic effects building on implicit surface curvature. We conclude that it is possible to bridge the gap between traditional high-quality off-line rendering and real-time performance without necessarily sacrificing quality. In an area such as volume rendering, which can be very demanding with respect to quality, e.g., in medical imaging, but whose usefulness increases significantly with higher interactivity, combining both high quality and high performance is especially important.
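The reconstruction that such a filtering framework evaluates per fragment is, conceptually, just a convolution sum. Here is a plain CPU sketch in Python with an illustrative Catmull-Rom kernel; the thesis implements this on graphics hardware, and the kernel choice is arbitrary by design.

    import numpy as np

    def reconstruct(samples, x, kernel, radius):
        # Continuous reconstruction from discrete samples:
        # f(x) = sum_i samples[i] * kernel(x - i),
        # summed over the kernel's finite support around x.
        i0 = int(np.floor(x)) - radius + 1
        value = 0.0
        for i in range(i0, i0 + 2 * radius):
            if 0 <= i < len(samples):
                value += samples[i] * kernel(x - i)
        return value

    def catmull_rom(t):
        # Illustrative cubic kernel with support radius 2.
        t = abs(t)
        if t < 1.0:
            return 1.5 * t**3 - 2.5 * t**2 + 1.0
        if t < 2.0:
            return -0.5 * t**3 + 2.5 * t**2 - 4.0 * t + 2.0
        return 0.0

    # Magnify a 1D "texture" four-fold:
    tex = np.array([0.0, 1.0, 0.5, 0.8, 0.2])
    zoomed = [reconstruct(tex, x, catmull_rom, 2) for x in np.linspace(0, 4, 17)]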
Item Interactive 3D Flow Visualization Based on Textures and Geometric Primitives (Laramee, 2004) Laramee, Robert S.
This thesis presents research in the area of flow visualization. The theoretical framework is based on the notion that flow visualization methodology can be classified into four main areas: direct, geometric, texture-based, and feature-based flow visualization. Our work focuses on the direct, geometric, and texture-based categories, with special emphasis on texture-based approaches. After presenting the state of the art, we discuss a technique for resampling of CFD simulation data. The resampling tool addresses both the perceptual problems resulting from a brute-force hedgehog visualization and flow field coverage problems. These challenges are handled by giving the user control of the resolution of the resampling grid in object space and precise control of where to place the vector glyphs. Afterward, we present a novel technique for visualization of unsteady flow on surfaces from computational fluid dynamics. The method generates dense representations of time-dependent vector fields with high spatio-temporal correlation. While the 3D vector fields are associated with arbitrary triangular surface meshes, the generation and advection of texture properties is confined to image space. Frame rates of up to 60 frames per second are realized by exploiting graphics card hardware. We apply this algorithm to unsteady flow on boundary surfaces of large, complex meshes from computational fluid dynamics composed of more than 200,000 polygons, to dynamic meshes with time-dependent geometry and topology, as well as to medical data. We also apply texture-based flow visualization techniques to isosurfaces. The result is a combination of two well-known scientific visualization techniques, namely iso-surfacing and texture-based flow visualization, into a useful hybrid approach. Next we describe our collection of geometric flow visualization techniques, including oriented streamlines, streamlets, a streamrunner tool, streamcomets, and a real-time animated streamline technique. We place special emphasis on the measures necessary for geometric techniques to be applicable to real-world data sets. In order to demonstrate the use of all techniques, we apply our direct, geometric, and texture-based flow visualization techniques to investigate swirl and tumble motion, two flow patterns found commonly in computational fluid dynamics (CFD). Our work presents a visual analysis of these motions across three spatial domains: 2D slices, 2.5D surfaces, and 3D.
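The image-space advection at the heart of such texture-based methods can be sketched independently of the surface projection. This minimal 2D step (illustrative names, nearest-neighbour lookup for brevity) advects a property texture backward along the velocity field and blends in noise so the pattern stays crisp over time.

    import numpy as np

    def advect_step(tex, vel, dt=1.0, blend=0.1, noise=None):
        # Backward advection: each pixel fetches its property from the
        # upstream position x - v(x)*dt, giving high temporal coherence.
        h, w = tex.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        src_x = np.clip(np.rint(xs - vel[..., 0] * dt), 0, w - 1).astype(int)
        src_y = np.clip(np.rint(ys - vel[..., 1] * dt), 0, h - 1).astype(int)
        out = tex[src_y, src_x]
        if noise is not None:
            # Inject a little fresh noise each frame so the advected
            # pattern does not smear out.
            out = (1.0 - blend) * out + blend * noise
        return out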