2012

Passive Spatio-Temporal Geometry Reconstruction of Human Faces at Very High Fidelity

Beeler, Thabo

A Frequency Analysis of Light Transport: from Theory to Implementation

Belcour, Laurent

Quadrilateral Surface Mesh Generation for Animation and Simulation

Bommes, David

Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System

Didyk, Piotr

Non-Uniform Deformable Volumetric Objects for Medical Organ Segmentation and Registration

Erdt, Marius

Signal processing methods for beat tracking, music segmentation, and audio retrieval

Grosche, Peter

Topological analysis of discrete scalar data

Günther, David

Practical Real-Time Strategies for Photorealistic Skin Rendering and Antialiasing

Jimenez, Jorge

Automated methods for audio-based music analysis with applications to musicology

Konz, Verena

Intrinsic image decomposition from multiple photographs

Laffont, Pierre-Yves

Computer Graphics and Nature

Neubert, Boris

Optimization Techniques for Computationally Expensive Rendering Algorithms

Navarro Gil, Luis Fernando

Meshless sampling and reconstruction of manifolds and patterns

Oeztireli, A. Cengiz

The Intrinsic Shape of Point Clouds

Ohrhallinger, Stefan

Real-time Illustrative Visualization of Cardiovascular Hemodynamics

Van Pelt, Roy F. P.

From irregular meshes to structured models

Panozzo, Daniele

Real-Time Geometry Decompression on Graphics Hardware

Meyer, Quirin

Colour videos with depth: acquisition, processing and evaluation

Richardt, Christian

Algorithms for 3D Isometric Shape Correspondence

Sahillioglu, Yusuf

Non-Periodic Corner Tilings in Computer Graphics

Schlömer, Thomas

Perception-Augmenting Illumination

Solteszova, Veronika


Recent Submissions

  • Item
    Passive Spatio-Temporal Geometry Reconstruction of Human Faces at Very High Fidelity
    (Beeler, 2012-09-18) Beeler, Thabo
    The creation of realistic synthetic human faces is one of the most important and at the same time most challenging topics in computer graphics. The high complexity of the face as well as our familiarity with it renders manual creation and animation impractical. The method of choice is thus to capture both shape and motion of the human face from real-life talent. To date, this is accomplished using active techniques, which augment the face either with markers or with projected illumination patterns. Active techniques currently provide the highest geometric accuracy, but they have severe shortcomings when it comes to capturing performances. In this thesis we present an entirely passive and markerless system to capture and reconstruct facial performances at unprecedented spatio-temporal resolution. The proposed algorithms compute the facial shape and motion at skin-pore resolution from multiple cameras, producing temporally compatible geometry for every frame. The thesis contains several contributions, both in computer vision and computer graphics. We introduce multiple capture setups that employ off-the-shelf cameras and are tailored to capturing the human face. We also propose different illumination setups, including the design and construction of a multi-purpose light stage with capabilities that reach beyond what is required within this thesis. The light stage contains around 500 color LEDs that can be controlled individually to produce arbitrary spatio-temporal illumination patterns. We present a practical calibration technique designed to automatically calibrate face capture setups, as well as techniques to geometrically calibrate the light stage. The core contribution of this thesis is a novel multi-view stereo (MVS) algorithm that introduces the concept of Mesoscopic Augmentation. We demonstrate that this algorithm can reconstruct facial skin at a quality on par with active techniques. The system is single-shot in that it requires only a single exposure per camera to reconstruct the facial geometry, which enables it to reconstruct even ephemeral poses and makes it well suited for performance capture. We extend the proposed MVS algorithm with the concept of the Episurface, which provides a plausible approximation to the true skin surface in areas where it is occluded by facial hair. We also present the first algorithm to reconstruct sparse facial hair at hair-fiber resolution from a single exposure. To track skin movement over time without the use of markers, we propose an algorithm that employs optical flow. To overcome inherent limitations of optical flow, such as drift, we introduce the concept of Anchor Frames, which enables us to track facial performances robustly even over long periods of time. Most optical flow algorithms assume some form of brightness constancy. This assumption, however, is violated for deforming surfaces, as the deformation changes self-shading over time. We present a technique called Ambient Occlusion Cancelling, which leverages the reconstructed per-frame geometry to remove varying self-shading from the images. We demonstrate that this technique complements and substantially improves existing optical flow methods. In addition, we show how the varying self-shading can be used to improve the reconstructed geometry. We hope that the concepts and ideas presented in this thesis will inspire future research in the area of time-varying geometry reconstruction. Already, several concepts presented in this thesis have found their way into industry, helping to produce the next generation of CG faces in theme parks, computer games, and feature films.
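    To make the Ambient Occlusion Cancelling idea concrete, here is a minimal sketch under simplifying assumptions (grayscale frames, an ambient-occlusion map rendered from the per-frame geometry); the function name and the `some_optical_flow` placeholder are hypothetical:

    ```python
    import numpy as np

    def cancel_ambient_occlusion(frame, ao, eps=1e-3):
        """Remove per-pixel self-shading by dividing out an ambient-occlusion map.

        frame : (H, W) grayscale image of the deforming face.
        ao    : (H, W) ambient occlusion predicted by rendering the per-frame
                reconstructed geometry (values in (0, 1]).
        The quotient approximates an image whose brightness no longer varies with
        deformation-induced self-shading, so the brightness-constancy assumption
        of optical flow holds more closely.
        """
        return frame / np.clip(ao, eps, 1.0)

    # Hypothetical usage: fr0/fr1 are consecutive frames, ao0/ao1 rendered AO maps.
    # flow = some_optical_flow(cancel_ambient_occlusion(fr0, ao0),
    #                          cancel_ambient_occlusion(fr1, ao1))
    ```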
  • Item
    A Frequency Analysis of Light Transport: from Theory to Implementation
    (Belcour, 2012-10-30) Belcour, Laurent
    The simulation of complex light effects such as depth of field, motion blur, or scattering in participating media requires a tremendous amount of computation. But the resulting pictures are often blurry, and we claim that such blurry regions should be computed sparsely to reduce their cost. To do so, we propose a method, covariance tracing, that estimates the local variations of a signal. This method is based on an extended frequency analysis of light transport and permits building efficient algorithms that distribute the cost of simulating the low-frequency parts of light transport. This thesis presents an improvement over the frequency analysis of local light fields introduced by Durand et al. [2005]. We add to this analysis light transport operations such as rough refraction, motion, and participating-media effects. We further improve the analysis of previously defined operations to handle non-planar occlusion of light, anisotropic BRDFs, and multiple lenses. We present covariance tracing, a method to evaluate the covariance matrix of the local light-field spectrum on a per-light-path basis. We show that covariance analysis is defined for all the defined Fourier operators. Furthermore, covariance analysis is compatible with Monte Carlo integration, making it practical for studying distributed effects. We show the use of covariance tracing in various applications, ranging from adaptive sampling and filtering for motion blur and depth of field, to kernel size estimation for photon mapping, to adaptive sampling of volumetric effects.
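    The covariance-tracing idea can be illustrated with a toy sketch: track a small covariance matrix of the local light-field spectrum along a path and update it at each event. The 2x2 matrices and sign conventions below are simplifying assumptions for illustration only; the thesis works with 5x5 matrices (2D space, 2D angle, time) and many more operators:

    ```python
    import numpy as np

    # Toy covariance tracing: one spatial and one angular frequency dimension,
    # ordered (space, angle). Conventions here are one simplified choice.

    def transport(sigma, d):
        """Free-space travel by distance d: a shear of the spectrum covariance."""
        S = np.array([[1.0, 0.0],
                      [-d,  1.0]])
        return S @ sigma @ S.T

    def occlusion(sigma, occluder_cov):
        """A blocker multiplies the light field, i.e. convolves its spectrum;
        for covariance matrices the convolution amounts to a sum."""
        return sigma + occluder_cov

    # Example: start with a smooth emitter, travel 2 units, pass a blocker.
    sigma = np.diag([0.1, 0.01])
    sigma = occlusion(transport(sigma, 2.0), np.diag([5.0, 0.0]))
    print(sigma)  # larger entries -> higher local frequency -> sample densely
    ```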
  • Item
    Quadrilateral Surface Mesh Generation for Animation and Simulation
    (Bommes, 2012-10-11) Bommes, David
    Accurately describing the geometry of objects in a digital environment, i.e. on computers, is an essential ingredient in many of today's applications. Often it is desired to forecast the behavior of real phenomena which depend on the geometry of objects by performing a simulation of, e.g., a car crash, the flow around the wing of a plane, the stability of a building, or the quality of the mobile phone network in a city, to name just a few. Such simulations are indispensable in situations where an experiment cannot be performed, as for instance the task of inspecting the stability of a building in case of an earthquake. However, even in cases where an experiment could potentially be performed, e.g. in the development of a new product, it often makes sense to run a simulation instead of the real-world experiment in order to reduce development cost and/or time. Another ongoing trend is the virtualization of environments, as can be seen for example in the areas of navigation or internet shopping. A digital geometry representation enables the user to thoroughly explore a possibly faraway object, not only from pre-chosen views but in its full variety. Moreover, a digitalized environment offers the powerful possibility of interactively visualizing additional data designed to support the desired application, as for instance overlaid signs in navigation software. Going one step further, instead of replicating and enriching the real world in a digital environment, designers, artists, and engineers are able to utilize the enormous potential of today's 3D modeling environments to create new complex objects or sometimes even completely artificial worlds, as for example in animation movies. Motivated by this huge range of applications, there is a long history of different digital geometry representations. Some applications require a solid (volumetric) representation of the object, while for others it is sufficient to represent solely its boundary, i.e. the surface of the object. In this thesis we will focus on surface representations, while an outlook on the analogous volumetric problem will be given.
  • Item
    Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System
    (Didyk, 2012-08-20) Didyk, Piotr
    Existing displays have a number of limitations which make it difficult to realistically reproduce real-world appearance: discrete pixels are used to represent images, which are refreshed only a limited number of times per second, the output luminance range is much smaller than in the real world, and only two dimensions are available to reproduce a three-dimensional scene. While in some cases technology has advanced so that higher frame rates, higher resolution, higher luminance, and even disparity-based stereo are possible, these solutions are often costly and, further, it is challenging to produce adequate content. On the other hand, the human visual system has certain limitations itself, such as the density of photoreceptors, imperfections in the eye optics, or the limited ability to discern high-frequency information. The methods presented in this dissertation show that taking these properties into account can improve the efficiency and perceived quality of displayed imagery. More precisely, these techniques make use of perceptual effects that are not measurable physically, allowing us to overcome the physical limitations of display devices and enhance apparent image qualities.
  • Item
    Non-Uniform Deformable Volumetric Objects for Medical Organ Segmentation and Registration
    (Erdt, 2012-06-13) Erdt, Marius
    In medical imaging, large amounts of data are created during each patient examination, especially with 3-dimensional image acquisition techniques such as Computed Tomography. This data is becoming more and more difficult for humans to handle without the aid of automated or semi-automated image processing and analysis. In particular, the manual segmentation of target structures in 3D image data is one of the most time-consuming tasks for the physician in the context of computerized medical applications. In addition, 3D image data increases the difficulty of mentally comparing two different images of the same structure. Robust automated organ segmentation and registration methods are therefore needed in order to fully utilize the potential of modern medical imaging.
  • Item
    Signal processing methods for beat tracking, music segmentation, and audio retrieval
    (Grosche, 2012-11-09) Grosche, Peter
    The goal of music information retrieval (MIR) is to develop novel strategies and techniques for organizing, exploring, accessing, and understanding music data in an efficient manner. The conversion of waveform-based audio data into semantically meaningful feature representations by the use of digital signal processing techniques is at the center of MIR and constitutes a difficult field of research because of the complexity and diversity of music signals. In this thesis, we introduce novel signal processing methods that allow for extracting musically meaningful information from audio signals. As main strategy, we exploit musical knowledge about the signals' properties to derive feature representations that show a significant degree of robustness against musical variations but still exhibit a high musical expressiveness. We apply this general strategy to three different areas of MIR: Firstly, we introduce novel techniques for extracting tempo and beat information, where we particularly consider challenging music with changing tempo and soft note onsets. Secondly, we present novel algorithms for the automated segmentation and analysis of folk song field recordings, where one has to cope with significant fluctuations in intonation and tempo as well as recording artifacts. Thirdly, we explore a cross-version approach to content-based music retrieval based on the query-by-example paradigm. In all three areas, we focus on application scenarios where strong musical variations make the extraction of musically meaningful information a challenging task.
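    As a concrete illustration of beat-tracking signal processing, the sketch below implements a standard baseline (a spectral-flux novelty curve followed by an autocorrelation tempo estimate); it is not the thesis's method, and all parameter values are illustrative:

    ```python
    import numpy as np

    def tempo_estimate(x, sr, frame=2048, hop=512, bpm_range=(40, 240)):
        """Toy tempo estimator: spectral-flux novelty curve + autocorrelation."""
        # Short-time Fourier magnitudes.
        n = 1 + (len(x) - frame) // hop
        win = np.hanning(frame)
        S = np.abs(np.array([np.fft.rfft(win * x[i*hop:i*hop+frame])
                             for i in range(n)]))
        # Spectral flux: positive magnitude increases, summed over frequency.
        nov = np.maximum(S[1:] - S[:-1], 0.0).sum(axis=1)
        nov -= nov.mean()
        # Autocorrelation of the novelty curve; pick the strongest lag in range.
        ac = np.correlate(nov, nov, mode='full')[len(nov) - 1:]
        fps = sr / hop
        lags = np.arange(1, len(ac))
        bpms = 60.0 * fps / lags
        mask = (bpms >= bpm_range[0]) & (bpms <= bpm_range[1])
        best = lags[mask][np.argmax(ac[1:][mask])]
        return 60.0 * fps / best
    ```

    Music with soft onsets or changing tempo, the focus of the thesis, is exactly where such a fixed global estimate breaks down.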
  • Item
    Topological analysis of discrete scalar data
    (Günther, 2012-12-18) Günther, David
    This thesis presents a novel computational framework that allows for the robust extraction and quantification of the Morse-Smale complex of a scalar field given on a 2- or 3-dimensional manifold. The proposed framework is based on Forman's discrete Morse theory, which guarantees the topological consistency of the computed complex. Using a graph-theoretical formulation of this theory, we present an algorithmic library that computes the Morse-Smale complex combinatorially with an optimal complexity of O(n^2) and efficiently creates a multi-level representation of it. We explore the discrete nature of this complex and relate it to its smooth counterpart. It is often necessary to estimate the feature strength of the individual components of the Morse-Smale complex -- the critical points and separatrices. To do so, we propose a novel output-sensitive strategy to compute the persistence of the critical points. We also extend this well-founded concept to separatrices by introducing a novel measure of feature strength called separatrix persistence. We evaluate the applicability of our methods in a wide variety of application areas, ranging from computer graphics to planetary science to computer and electron tomography.
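    The persistence of critical points has a particularly simple 0-dimensional special case that can be sketched with a union-find sweep; the following toy implementation on an arbitrary vertex-edge graph is for illustration only and is far simpler than the thesis's output-sensitive strategy:

    ```python
    import numpy as np

    def persistence_0d(values, edges):
        """Persistence pairing of minima of a scalar field given on a graph.

        values[i] is the scalar value at vertex i; edges is a list of (i, j).
        Classic union-find sweep over sublevel sets with the elder rule.
        """
        order = np.argsort(values)
        parent, birth, pairs = {}, {}, []

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path compression
                v = parent[v]
            return v

        adj = [[] for _ in values]
        for i, j in edges:
            adj[i].append(j)
            adj[j].append(i)

        for v in order:                          # sweep by increasing value
            parent[v] = v
            birth[v] = values[v]
            for u in adj[v]:
                if u in parent:                  # neighbor already alive: merge
                    ru, rv = find(u), find(v)
                    if ru != rv:                 # elder rule: younger one dies
                        young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                        pairs.append((birth[young], values[v]))  # (birth, death)
                        parent[young] = old
        return pairs  # persistence of each paired minimum = death - birth
    ```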
  • Item
    Practical Real-Time Strategies for Photorealistic Skin Rendering and Antialiasing
    (Jimenez, 2012-07-13) Jimenez, Jorge
    The first topic of this thesis, photorealistic skin rendering, is of extreme importance for creating believable special effects in cinematography, but it has to an extent been ignored by the game industry. In recent years, however, there has been a trend towards more character-driven, film-like games, which has raised interest in photorealistically depicting human characters. In contrast with the offline rendering technology used for films, where hours and hours can be spent on rendering, in the practical real-time realm the time allotted for skin rendering is in the millisecond range (with even further constraints in games). Our challenge is, then, to match film-quality skin rendering in runtimes that are orders of magnitude smaller, taking into account very fine qualities of skin appearance, including subsurface scattering (SSS), facial color changes, and wrinkle animation.
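    A core ingredient of real-time skin rendering is blurring the diffuse lighting in screen space to imitate subsurface scattering. The sketch below shows one separable pass in that spirit, with a generic Gaussian kernel instead of the skin-fitted RGB profile a production shader would use; all parameter values are illustrative:

    ```python
    import numpy as np

    def sss_blur_1d(diffuse, depth, width=0.02, steps=9, depth_scale=300.0):
        """One horizontal pass of a screen-space subsurface-scattering blur.

        diffuse : (H, W, 3) diffuse lighting, depth : (H, W) linear depth.
        Samples are rejected across large depth jumps so light does not bleed
        between unrelated surfaces; a second pass would blur vertically.
        """
        h, w = depth.shape
        out = np.zeros_like(diffuse)
        offsets = np.linspace(-1.0, 1.0, steps)
        weights = np.exp(-3.0 * offsets**2)
        weights /= weights.sum()
        for o, wt in zip(offsets, weights):
            shift = int(round(o * width * w))
            sample = np.roll(diffuse, shift, axis=1)
            sample_d = np.roll(depth, shift, axis=1)
            # Follow the surface: fall back to the center pixel at depth edges.
            keep = np.abs(sample_d - depth) * depth_scale < 1.0
            out += wt * np.where(keep[..., None], sample, diffuse)
        return out
    ```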
  • Item
    Automated methods for audio-based music analysis with applications to musicology
    (Konz, 2012-11-09) Konz, Verena
    This thesis contributes to bridging the gap between music information retrieval (MIR) and musicology. We present several automated methods for music analysis, which are motivated by concrete application scenarios of central importance in musicology. In this context, the automated music analysis is performed on the basis of audio material; one reason is that for a given piece of music usually many different recorded performances exist. The availability of multiple versions of a piece of music is exploited in this thesis to stabilize analysis results. We show how the presented automated methods open up new possibilities for supporting musicologists in their work. Furthermore, we introduce novel interdisciplinary concepts which facilitate the collaboration between computer scientists and musicologists. Based on these concepts, we demonstrate how MIR researchers and musicologists may greatly benefit from each other in an interdisciplinary collaboration. Firstly, we present a fully automatic approach for the extraction of tempo parameters from audio recordings and show to what extent this approach may support musicologists in analyzing recorded performances. Secondly, we introduce novel user interfaces which are aimed at encouraging the exchange between computer science and musicology. In this context, we indicate the potential of computer-based methods in music education by testing and evaluating a novel MIR user interface at the University of Music Saarbrücken. Furthermore, we show how a novel multi-perspective user interface allows for interactively viewing and evaluating version-dependent analysis results and opens up new possibilities for interdisciplinary collaborations. Thirdly, we present a cross-version approach for the harmonic analysis of audio recordings and demonstrate how this approach enables musicologists to explore harmonic structures even across large music corpora. Here, one simple yet important conceptual contribution is to convert the physical time axis of an audio recording into a performance-independent musical time axis given in bars.
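    The conversion from physical to musical time can be sketched as a simple piecewise-linear warp between bar boundaries; `bar_times` below is a hypothetical input (the time in seconds of each bar line, annotated or automatically estimated):

    ```python
    import numpy as np

    def to_musical_time(t, bar_times):
        """Map physical time (seconds) to musical time (bars) by piecewise-
        linear interpolation between bar boundaries; bar_times must be an
        increasing sequence of bar-line timestamps."""
        bars = np.arange(len(bar_times), dtype=float)
        return np.interp(t, bar_times, bars)

    # E.g. chroma features sampled at times t can be resampled on a bar grid,
    # so the same passage aligns across performances with different tempi.
    ```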
  • Item
    Intrinsic image decomposition from multiple photographs
    (Laffont, 2012-10-12) Laffont, Pierre-Yves
    Editing materials and lighting is a common image manipulation task that requires significant expertise to achieve plausible results. Each pixel aggregates the effect of both material and lighting, therefore standard color manipulations are likely to affect both components. Intrinsic image decomposition separates a photograph into independent layers: reflectance, which represents the color of the materials, and illumination, which encodes the effect of lighting at each pixel. In this thesis, we tackle this ill-posed problem by leveraging additional information provided by multiple photographs of the scene. We combine image-guided algorithms with sparse 3D information reconstructed from multi-view stereo in order to constrain the decomposition. We first present an approach to decompose images of outdoor scenes, using photographs captured at a single time of day. This method not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. We then develop a methodology to extract lighting information about a scene solely from a few images, thus simplifying the capture and calibration steps of our intrinsic decomposition. In a third part, we focus on image collections gathered from photo-sharing websites or captured with a moving light source. We exploit the variations of lighting to process complex scenes without user assistance or precise and complete geometry. The methods described in this thesis enable advanced image manipulations such as lighting-aware editing, insertion of virtual objects, and image-based illumination transfer between photographs of a collection.
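    For intuition about intrinsic decomposition from multiple photographs, here is a classical single-view baseline (a Weiss-style median of log-gradients over a fixed-view sequence, integrated with a DCT Poisson solver); the thesis's methods differ in that they exploit multi-view 3D information:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def reflectance_from_sequence(images, eps=1e-4):
        """Baseline intrinsic decomposition for a fixed-view image sequence.

        Assumes log I_k = log R + log S_k with sparse illumination gradients,
        so the per-pixel median image gradient estimates the reflectance
        gradient.  images: (K, H, W) grayscale stack under varying lighting.
        """
        logs = np.log(np.maximum(images, eps))
        gx = np.median(np.diff(logs, axis=2), axis=0)   # (H, W-1)
        gy = np.median(np.diff(logs, axis=1), axis=0)   # (H-1, W)
        log_r = poisson_integrate(gx, gy)
        return np.exp(log_r - log_r.max())              # reflectance up to scale

    def poisson_integrate(gx, gy):
        """Least-squares integration of a gradient field (Neumann/DCT solver)."""
        h, w = gy.shape[0] + 1, gx.shape[1] + 1
        div = np.zeros((h, w))
        div[:, :-1] += gx; div[:, 1:] -= gx
        div[:-1, :] += gy; div[1:, :] -= gy
        f = dctn(div, norm='ortho')
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
        denom = 2.0 * (np.cos(np.pi * yy / h) - 1) + 2.0 * (np.cos(np.pi * xx / w) - 1)
        f[0, 0] = 0.0; denom[0, 0] = 1.0                # fix the free constant
        return idctn(f / denom, norm='ortho')
    ```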
  • Item
    Computer Graphics and Nature
    (Neubert, 2012-05-07) Neubert, Boris
    This thesis presents new methods for modeling and efficient rendering of botanical scenes and objects. The first method allows for producing 3D tree models from a set of images with limited user intervention by combining principles of image- and simulation-based modeling techniques. The image information is used to estimate an approximate voxel-based tree volume. Density values of the voxels are used to produce initial positions for a set of particles. Performing a 3D flow simulation, the particles are traced downwards to the tree basis and are combined to form twigs and branches. If possible, the trunk and the first-order branches are determined in the input photographs and are used as attractors during the particle simulation. Different initial particle positions result in a variety of similar-looking branching structures for a single set of photographs. The guided particle simulation meets two important criteria that improve on common modeling techniques: it is possible to achieve a high visual similarity to the photographs, and at the same time the resulting plant can be manipulated simply by altering the input photographs and changing the shape or density, providing the artist with an expressive tool while reducing the need for manually modeling plant details. A follow-up paper based on guided particle simulations coined the term self-organizing tree models. The second method improves the concept of sketch-based modeling tools for plants. The proposed system converts a freehand sketch of a tree drawn by the user into a full 3D model that is both complex and realistic-looking. This is achieved by probabilistic optimization based on parameters obtained from a database of tree models. Branch interaction is modeled by a Markov random field, which allows for inferring missing information about the tree structure and for combining sketch-based and data-driven methodologies. The principle of self-similarity is exploited to add new branches before populating all branches with leaves.

    Both modeling methods presented in this work produce very complex tree models. While this richness is needed to model highly realistic scenes, it leads to a complexity that makes real-time rendering impossible. We present an optimized pruning algorithm that considerably reduces the geometry needed for large botanical scenes, while maintaining high and coherent rendering quality. We improve upon previous techniques by applying model-specific geometry reduction functions and optimized scaling functions. We propose the use of Precision and Recall (PR) as a measure of rendering quality and show how PR scores can be used to predict better scaling values. To verify the measure of quality, we conducted a user study allowing subjects to adjust the scaling value, which shows that the predicted scaling values match the preferred ones. Finally, we extend the originally purely stochastic geometry prioritization for pruning in order to account for a view-optimized geometry selection, which allows global scene information, such as occlusion, to be taken into consideration. We demonstrate our method for the rendering of scenes with thousands of complex tree models in real time.
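    A toy version of the guided particle idea can be sketched in a few lines; the assumptions here (a precomputed voxel density, a random downward walk instead of a real 3D flow simulation, no photograph-derived attractors) are simplifications for illustration:

    ```python
    import numpy as np

    def grow_branches(density, n_particles=300, seed=0):
        """Seed particles in proportion to the estimated voxel density, then
        walk each one downward, biased toward denser neighboring voxels; the
        traversed voxels sketch the branching structure."""
        rng = np.random.default_rng(seed)
        D, H, W = density.shape                 # z (height), y, x
        zs, ys, xs = np.nonzero(density > 0)
        p = density[zs, ys, xs].astype(float)
        idx = rng.choice(len(zs), size=n_particles, p=p / p.sum())
        paths = []
        for k in idx:
            z, y, x = int(zs[k]), int(ys[k]), int(xs[k])
            path = [(z, y, x)]
            while z > 0:                        # descend toward the tree base
                z -= 1
                cands = [(yy, xx) for yy in (y-1, y, y+1)
                                  for xx in (x-1, x, x+1)
                         if 0 <= yy < H and 0 <= xx < W]
                w = np.array([density[z, yy, xx] + 1e-6 for yy, xx in cands])
                y, x = cands[rng.choice(len(cands), p=w / w.sum())]
                path.append((z, y, x))
            paths.append(path)
        return paths     # overlapping paths merge into twigs and branches
    ```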
  • Item
    Optimization Techniques for Computationally Expensive Rendering Algorithms
    (Navarro Gil, 2012-04-01) Navarro Gil, Luis Fernando
    This thesis focuses on a group of rendering methods known for their high computational requirements. We analyse them in detail and reduce their cost using a set of conceptually different approaches. We first focus on rendering time-varying participating media. We propose a modified formulation of the rendering equation and implement several optimizations of the ray marching algorithm. Our GPU-based framework can generate photo-realistic images using high dynamic range lighting at interactive rates. We also analyse two different aspects of the generation of antialiased images. The first is targeted at screen-space antialiasing and the reduction of image artifacts. We propose a real-time implementation of the morphological antialiasing algorithm that is efficient to evaluate, has a moderate impact, and can be easily integrated into existing pipelines. The final part of the thesis takes a radically different approach and studies the responses of the human visual system to motion-blurred stimuli. Using psychophysical experiments, we analyse the limits of the perception of temporally antialiased images. Results, both for standard sequences and stereoscopic footage, suggest that human observers have a notable tolerance to image artifacts like strobing, excessive blur, and noise. In some cases, images rendered with low-quality settings may be indistinguishable from a gold standard. Based on these insights, we provide examples of how render settings can be used to reduce computation times without degradation of visual quality. In summary, this thesis describes novel algorithmic optimizations as well as aspects of human perception that can be leveraged to design more efficient rendering methods.
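    Morphological antialiasing begins by locating luminance discontinuities; the sketch below shows only that first pass (the pattern classification and coverage-based blending of the full GPU algorithm are omitted, and the threshold is illustrative):

    ```python
    import numpy as np

    def luminance(img):
        """Rec. 709 luma of an (H, W, 3) linear RGB image."""
        return img @ np.array([0.2126, 0.7152, 0.0722])

    def mlaa_edges(img, threshold=0.1):
        """First pass of a morphological antialiasing pipeline: flag
        discontinuities between each pixel and its left/top neighbor."""
        L = luminance(img)
        left = np.zeros(L.shape, bool)
        top = np.zeros(L.shape, bool)
        left[:, 1:] = np.abs(L[:, 1:] - L[:, :-1]) > threshold
        top[1:, :] = np.abs(L[1:, :] - L[:-1, :]) > threshold
        return left, top   # subsequent passes turn these into blend weights
    ```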
  • Item
    Meshless sampling and reconstruction of manifolds and patterns
    (Öztireli, 2013) Oeztireli, A. Cengiz
    It is one of the main goals of computer graphics in particular, and science in general, to understand and mimic the complexity of the real world. Over the past decades, it has been proven that the mathematical structures of manifolds and stochastic point patterns result in accurate and efficient computational representations for the geometric complexity of the world, and modeling these structures with meshless methods offers a versatile and unified treatment. In this thesis, we develop techniques and algorithms to tackle the fundamental problems of meshless sampling and reconstruction of manifolds and point patterns. The data acquired from manifold surfaces of objects is often noisy, corrupted with outliers, and sparse in some parts of the surface, making it very challenging to generate accurate reconstructions of the underlying surface. The first problem we address is the generation of robust reconstructions of point-sampled manifolds that preserve sharp features and high-frequency detail. Due to the common smoothness assumption, most approximation methods, when directly applied to the manifold surface reconstruction problem, can only generate smooth surfaces without such features, and are significantly affected by outliers. We propose to reformulate moving least squares based point set surface reconstruction in the framework of local kernel regression, which enables us to incorporate methods from robust statistics to arrive at a feature-preserving and robust point set surface definition. The new implicit surface definition can preserve fine details and all types of sharp features with controllable sharpness, has a simple analytic form, is robust to outliers and sparse sampling, and is efficient and simple to compute. Since the definition is continuous, it is amenable to further processing without any special treatment. The accuracy of the reconstruction of a surface is necessarily determined by the density and distribution of the points sampled from it. It is thus essential to ensure a dense enough sampling for faithful reconstructions. On the other hand, typical datasets can be massive, with billions of points, and redundant in some parts, which significantly degrades the performance of the reconstruction algorithms. Hence, finding optimal sampling conditions for a given reconstruction method is essential for efficient and accurate reconstructions. We propose new simplification and resampling algorithms that result in accurate reconstructions while minimizing redundancy. The algorithms are out-of-core, efficient, simple to implement, feature sensitive, and generate high-quality blue-noise distributions. They utilize a new measure that quantifies the effect a point has on the definition of a manifold, if it is added to the set defining the manifold, by considering the change in the Laplace-Beltrami spectrum. We derive an approximation of this measure by a novel technique that combines spectral analysis of manifolds and kernel methods. Although the measure is conceptually global, it requires only local computations, making the algorithms time and memory efficient. Not all structures of the real world admit a deterministic manifold form. Indeed, many structures, from the distribution of trees in a forest or pores in a piece of Swiss cheese to those of molecular particles or movements of humans in a crowd, are best modeled in a distributional sense by stochastic point patterns. Reconstruction of such patterns from given example distributions is thus of utmost importance. To achieve this, we first propose a new unified analysis of point distributions based on a kernel-based approximation of the pair correlation function (PCF). This analysis shows that the PCF is sufficient for unique determination and discrimination of most point patterns, and that there is a quantifiable relation between patterns depending on a new measure of their irregularity. Following this analysis, we propose the first algorithms that can synthesize point distributions with characteristics matching those of provided examples, by minimizing a certain distance between the PCFs. Our first algorithm is a generalized dart throwing method that accepts or rejects added points depending on the PCF. The second, gradient descent based algorithm takes the output of the first algorithm and moves the points so as to minimize the distance between the target PCF and the PCF of the final output point set. The resulting point distributions have the characteristics of the target patterns to be reconstructed.
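    The PCF-based analysis and the first synthesis stage can be sketched as follows; the kernel width, normalization, and acceptance tolerance are simplified relative to the thesis:

    ```python
    import numpy as np

    def pcf(points, radii, sigma=0.01):
        """Gaussian-kernel estimate of the pair correlation function of a 2D
        point set on the unit torus."""
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        d = pts[:, None, :] - pts[None, :, :]
        d -= np.round(d)                                  # toroidal wrap-around
        dist = np.sqrt((d ** 2).sum(-1))[~np.eye(n, dtype=bool)]
        lam = n                                           # intensity on unit area
        g = np.empty(len(radii))
        for i, r in enumerate(radii):
            k = np.exp(-(dist - r) ** 2 / sigma ** 2) / (np.sqrt(np.pi) * sigma)
            g[i] = k.sum() / (n * lam * 2.0 * np.pi * max(r, 1e-9))
        return g

    def dart_throwing_match(target_g, radii, n, trials=20000, tol=0.3, seed=0):
        """Generalized dart throwing: accept a candidate only while the
        evolving PCF stays below the target profile (the second, gradient
        descent stage of the synthesis is omitted here)."""
        rng = np.random.default_rng(seed)
        pts = [rng.random(2)]
        for _ in range(trials):
            if len(pts) >= n:
                break
            cand = pts + [rng.random(2)]
            if np.all(pcf(cand, radii) <= target_g + tol):
                pts = cand
        return np.array(pts)
    ```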
  • Item
    The Intrinsic Shape of Point Clouds
    (Ohrhallinger, 2012-07-12) Ohrhallinger, Stefan
    Given a point cloud, in the form of unorganized points, the problem of automatically connecting the dots to obtain an aesthetically pleasing and piecewise-linear closed interpolating boundary shape has been extensively researched for over three decades. In R3, it is even more complicated to find an aesthetic closed oriented surface. Most previous methods for shape reconstruction exclusively from coordinates work well only when the point spacing on the shape boundary is dense and locally uniform. The problem of shape construction from non-dense and locally non-uniformly spaced point sets is, in our opinion, not yet satisfactorily solved. Various extensions to earlier methods do not work that well and do not provide any performance guarantees either. Our main thesis in this research is that a point set, even with non-dense and locally non-uniform spacing, has an intrinsic shape which optimizes in some way the Gestalt principles of form perception. This shape can be formally defined as the minimum of an energy function over all possible closed piecewise-linear interpolations of this point set. Further, while finding this optimal shape is NP-hard, it is possible to heuristically search for an acceptable approximation within reasonable time. Our minimization objective is guided by Gestalt's laws of Proximity, Good Continuity, and Closure. Minimizing curvature tends to satisfy proximity and good continuity. For computational simplification, we globally minimize the longest-edge-in-simplex, since it is intrinsic to a single facet and also a factor in mean curvature. And we require a closed shape. Using such an intrinsic criterion permits the extraction of an approximate shape with a linearithmic algorithm as a simplicial complex, which we have named the Minimum Boundary Complex. Experiments show that it seems to be a very close approximation to the desired boundary shape and that it retains its genus. Further, it can be constructed locally and can also handle sensor data with significant noise. Its quick construction is due to not being restricted by the manifold property required of the boundary shape. Therefore it has many applications where a manifold shape is not necessary, e.g. visualization, shape retrieval, shadow mapping, and topological data analysis in higher dimensions. The definition of the Minimum Boundary Complex is our first major contribution. Our next two contributions are new methods for constructing boundary shapes by transforming the boundary complex into a close approximation of the minimum boundary shape. These algorithms vary a topological constraint to first inflate the boundary complex to recover a manifold hull and then sculpture it to extract a Minimum Boundary approximation, which interpolates all the points. In the R3 method, we show how local minima can be avoided by covering holes in the hull. Finally, we apply a mesh fairing step to optimize mean curvature directly. We present results for shape construction in R2 and R3 which clearly demonstrate that our methods work better than the best performing earlier methods for non-dense and locally non-uniformly spaced point sets, while maintaining competitive linearithmic complexity.
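    A drastically simplified 2D reading of the boundary-complex construction is sketched below: add edges in order of increasing length until every vertex has degree at least two (the thesis works on the Delaunay graph, minimizes the longest edge per simplex, and also handles R3):

    ```python
    import numpy as np
    from itertools import combinations

    def boundary_complex_2d(points):
        """Greedy toy construction: shortest edges first, until every vertex
        is interpolated by at least two edges (no manifold constraint)."""
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        edges = sorted(combinations(range(n), 2),
                       key=lambda e: np.linalg.norm(pts[e[0]] - pts[e[1]]))
        degree = np.zeros(n, int)
        chosen = []
        for i, j in edges:
            if degree[i] < 2 or degree[j] < 2:
                chosen.append((i, j))
                degree[i] += 1
                degree[j] += 1
            if degree.min() >= 2:
                break
        return chosen   # a non-manifold complex; inflation/sculpturing follow
    ```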
  • Item
    Real-time Illustrative Visualization of Cardiovascular Hemodynamics
    (Van Pelt, 2012-06-13) Van Pelt, Roy F. P.
    Modern magnetic resonance imaging techniques enable acquisition of time-resolved volumetric blood-flow velocity data. With these data, physicians aim for newfound insight into the intricate blood-flow dynamics. This conceivably leads to improved diagnosis and prognosis of cardiovascular diseases, as well as a better assessment of treatments and risks. We facilitate the time-consuming and challenging process of visual analysis of these unsteady, multi-dimensional and multi-valued data by comprehensive exploratory visualization techniques, tailored to communicate blood flow in the heart and the thoracic arteries. We introduce abstraction techniques to reduce the abundance of information contained in the data. Interactive exploration is enabled by probing tools, selecting regions-of-interest that serve as a basis for our real-time illustrative visualizations. Based on evaluation studies with the involved physicians, we believe that real-time visual exploration of blood-flow data facilitates the qualitative analysis.
  • Item
    From irregular meshes to structured models
    (Panozzo, 2012-05-07) Panozzo, Daniele
    Surface manipulation and representation is becoming increasingly important, with applications ranging from special effects for films and video games to physical simulation on the hulls of airplanes. Much research has been done to understand surfaces and to provide practical and theoretical tools suitable for acquiring, designing, modeling, and rendering them. This thesis contributes to filling the gap that exists between the acquisition of surfaces from 3D scanners and their use in modeling. The problem has been studied from different perspectives, and our contributions span the entire modeling pipeline, from decimation and parametrization to interactive modeling. First and foremost, we propose an automatic approach that converts a surface, represented as a triangle mesh, into a base domain for the definition of a higher-order surface. This allows us to have the advantages of a structured base domain without the need to define it by hand. The algorithm performs a series of local operations on the provided triangulation to transform it into a coarse quad mesh, greedily minimizing a functional that keeps the newly computed smooth surface as close as possible to the original triangle mesh. The same problem is also approached from a different angle, by proposing an algorithm that computes a global parametrization of the surface, using an automatically constructed abstract mesh as domain. The problems are related because whenever a global parametrization of a surface is known, it is possible to produce a quad mesh by imposing a regular grid over the parametrization domain, which is usually a plane or a collection of planes, and mapping it to the surface using the parametrization itself. It is then possible to use surface fitting methods to convert the quad mesh into a base domain for a high-order surface. Our contribution is an algorithm that starts from a cross-field defined on a surface, simplifies its topology, and then uses it to compute a global parametrization that is especially suitable for remeshing purposes. It can also be used for other usual applications of a parametrization, like texturing or non-photorealistic rendering. Since most objects in the real world are symmetric, we studied robust methods to extract the symmetry map from acquired models. For extrinsic symmetries, we propose a simple and fully automatic method based on invariants usually used for image analysis. For intrinsic symmetries, we introduce a novel topological definition of symmetry and a novel algorithm that, starting from a few correspondences, is able to extract a high-quality symmetry map for the entire shape. The extracted symmetry map is then used to produce symmetric remeshings of existing models, as well as symmetric non-photorealistic rendering and parametrization. We also introduce an innovative parametrization algorithm for the special case of mapping a rectangular subset of the plane to another subset of different size. This case is of special interest for the task of interactive image retargeting, where the aspect ratio of an image is changed without distorting the content in interesting areas. Our algorithm searches for the parametrization function in the restricted subset of axis-aligned deformations, by minimizing a convex functional. This allows us to achieve robustness and real-time performance even on mobile devices with low processing power. A user study with 305 participants shows that our method produces high-quality results. Starting from a structured model, we consider the problem of refining it in an adaptive way. We present a way to encode an arbitrary subdivision hierarchy implicitly, requiring an amount of additional space that is negligible with respect to the size of the mesh. The core idea is general, and we present two different instantiations, one for triangle and one for quad meshes. In both cases, we discuss how they can be implemented on top of well-known data structures, and we introduce the concept of topological angles, which allows efficient navigation of the implicit hierarchy. Our adaptive framework can be used to define adaptive subdivision surfaces and to generate semi-regular remeshings of a given surface. Finally, we extend common geometric modeling algorithms to prevent self-intersections. We show that it is possible to extend them to produce interesting deformations, which depend on the modeling algorithm used, while avoiding self-intersections during interactive modeling. Self-intersections are a common problem, since they usually represent unrealistic scenarios, and if a mesh contains intersections it is hard to run any kind of physical simulation on it. It is thus impossible to realistically model clothes or hair on self-intersecting meshes, and the manual cleaning of these models is time-consuming and error-prone. Our proposal allows us to produce models with the guarantee that self-intersections cannot appear, and it can be easily integrated into existing modeling software systems.
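    The axis-aligned retargeting idea admits a small closed-form sketch: minimize a saliency-weighted deviation of per-column scale factors subject to the target width (the thesis solves a more general convex problem on a coarse grid in both axes; all names here are illustrative):

    ```python
    import numpy as np

    def axis_aligned_retarget(saliency, new_width):
        """Per-column scale factors for width retargeting.

        saliency : (H, W) importance map; salient columns resist rescaling.
        Minimizes sum_j w_j * (s_j - 1)^2 subject to sum_j s_j = new_width,
        solved in closed form with a Lagrange multiplier.
        """
        w = saliency.mean(axis=0) + 1e-6      # per-column importance
        W = len(w)
        mu = 2.0 * (new_width - W) / np.sum(1.0 / w)
        s = 1.0 + mu / (2.0 * w)
        # Clipping keeps every column visible but may slightly violate the
        # exact width; renormalize if needed.
        return np.clip(s, 0.05, None)

    # Resampling the image columns by the cumulative sum of s applies the warp;
    # low-saliency columns absorb most of the shrinking or stretching.
    ```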
  • Item
    Real-Time Geometry Decompression on Graphics Hardware
    (Dr. Hut Verlag, 2012-08-01) Meyer, Quirin
    Real-Time Computer Graphics focuses on generating images fast enough to cause the illusion of continuous motion. It is used in science, engineering, computer games, image processing, and design. Special-purpose graphics hardware, the so-called graphics processing unit (GPU), accelerates the image generation process substantially. Therefore, GPUs have become indispensable tools for Real-Time Computer Graphics. The purpose of GPUs is to create two-dimensional (2D) images from three-dimensional (3D) geometry. To this end, the 3D geometry resides in GPU memory. However, the ever increasing demand for more realistic images constantly pushes geometry memory consumption. This makes GPU memory a limiting resource in many Real-Time Computer Graphics applications. An effective way of getting more geometry into GPU memory is to compress it. In this thesis, we introduce novel algorithms for compressing and decompressing geometry. We propose methods to compress and decompress 3D positions, 3D unit vectors, and the topology of triangle meshes, obtaining compression ratios from 2:1 to 26:1. We focus on exploiting the high degree of parallelism available on GPUs for decompression. This allows our decompression techniques to run in real time with only a small impact on rendering speed. At the same time, our techniques achieve high image quality: images generated from compressed geometry are visually indistinguishable from images generated from non-compressed geometry. Moreover, our methods are easy to combine with existing rendering techniques, so a wide range of applications may benefit from our results.
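    As a flavor of unit-vector compression, here is a sketch of octahedral encoding, a well-known scheme in this family: project the vector onto the octahedron, unfold it into a square, and quantize two coordinates (the bit width and layout below are illustrative):

    ```python
    import numpy as np

    def oct_encode(n, bits=16):
        """Encode a 3D unit vector as two fixed-point numbers."""
        n = n / np.abs(n).sum()              # project onto |x|+|y|+|z| = 1
        if n[2] < 0.0:                       # unfold the lower pyramid
            x, y = n[0], n[1]
            n[0] = (1.0 - abs(y)) * np.sign(x)
            n[1] = (1.0 - abs(x)) * np.sign(y)
        return np.round((n[:2] * 0.5 + 0.5) * (2 ** bits - 1)).astype(np.uint32)

    def oct_decode(q, bits=16):
        """Invert oct_encode (up to quantization error)."""
        f = q.astype(np.float64) / (2 ** bits - 1) * 2.0 - 1.0
        z = 1.0 - abs(f[0]) - abs(f[1])
        if z < 0.0:                          # refold the lower pyramid
            x, y = f[0], f[1]
            f[0] = (1.0 - abs(y)) * np.sign(x)
            f[1] = (1.0 - abs(x)) * np.sign(y)
        v = np.array([f[0], f[1], z])
        return v / np.linalg.norm(v)
    ```

    Two 16-bit coordinates replace three 32-bit floats, a 3:1 ratio, and the decode is branch-light enough to run per vertex on the GPU.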
  • Item
    Colour videos with depth: acquisition, processing and evaluation
    (Richardt, 2012-02-21) Richardt, Christian
    The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalent in computer vision and computer graphics are geometric models, which provide a wealth of information about represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I hence investigate a combination of videos and geometric models: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display.

    I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid – a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension which incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps – particularly in the presence of image noise.

    The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise.

    I show that these videos with depth empower a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information, like a proposed video relighting technique which requires high-quality surface normals to produce plausible results. In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically.

    These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result of this is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model using a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies.

    Keywords: RGBZ videos, temporally coherent stereo matching, time-of-flight sensor fusion, stereoscopic viewing comfort
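    The principle behind fusing a low-resolution time-of-flight stream with a high-resolution colour camera can be sketched as joint bilateral upsampling; this naive CPU version is for illustration only (the dissertation's pipeline also aligns the streams, fills holes, and filters temporally, and runs in real time):

    ```python
    import numpy as np

    def joint_bilateral_upsample(depth_lo, color_hi, factor,
                                 sigma_s=2.0, sigma_c=0.1):
        """Upsample depth_lo (h/factor, w/factor) to the resolution of
        color_hi (h, w, 3), weighting each low-res depth sample by spatial
        distance and by colour similarity in the high-res guide image."""
        h, w, _ = color_hi.shape
        lh, lw = depth_lo.shape
        out = np.zeros((h, w))
        rad = int(2 * sigma_s)
        for y in range(h):
            for x in range(w):
                acc = wacc = 0.0
                for dy in range(-rad, rad + 1):
                    for dx in range(-rad, rad + 1):
                        ly, lx = y // factor + dy, x // factor + dx
                        if not (0 <= ly < lh and 0 <= lx < lw):
                            continue
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        cy = min(ly * factor, h - 1)
                        cx = min(lx * factor, w - 1)
                        dc = color_hi[y, x] - color_hi[cy, cx]
                        wc = np.exp(-float(dc @ dc) / (2 * sigma_c ** 2))
                        acc += ws * wc * depth_lo[ly, lx]
                        wacc += ws * wc
                out[y, x] = acc / max(wacc, 1e-9)
        return out
    ```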
  • Item
    Algorithms for 3D Isometric Shape Correspondence
    (Sahillioglu, 2012-08-01) Sahillioglu, Yusuf
    There are many pairs of objects in the digital world that need to be related before performing any comparison, transfer, or analysis between them. Shape correspondence algorithms essentially address this problem by taking two shapes as input, with the aim of finding a mapping that couples similar or semantically equivalent surface points of the given shapes. We focus on computing correspondences between some featured or all present points of two semantically similar 3D shapes whose surfaces overlap completely or partially up to isometric, i.e., distance-preserving, deformations and scaling. Put differently, our isometric shape correspondence algorithms handle several different cases of the shape correspondence problem, which can be differentiated based on how similar the shape pairs are, whether they are partially overlapped, the resolution of the desired mapping, etc. Although there exist methods that can, in most cases, satisfactorily establish 3D correspondences between two given shapes, these methods commonly suffer from certain drawbacks, such as high computational load, incapability of establishing a correspondence which is partial and dense at the same time, approximation and embedding errors, and confusion of symmetrical parts of the shapes. While the existing methods constitute a solid foundation and a good starting point for the shape correspondence problem, our novel solutions, each designed for a given scenario, achieve significant improvements and contributions. We specifically explore the 3D shape correspondence problem under two categories, complete and partial correspondence, where the former is categorized further according to the output resolution into coarse and dense correspondence. For complete correspondence at coarse resolution, after jointly sampling evenly-spaced feature vertices on the shapes, we formulate the problem as combinatorial optimization over the domain of all possible mappings between source and target features, which then reduces within a probabilistic framework to a log-likelihood maximization problem that we solve via the Expectation-Maximization (EM) algorithm. Due to the computational limitations of this approach, we design a fast coarse-to-fine algorithm to achieve dense correspondence between all vertices of complete models, with specific care taken over the symmetric flip issue. Our scale normalization method, based on a novel scale-invariant isometric distortion measure, handles a particular and rather restricted setting of partial matching, whereas our rank-and-vote-and-combine (RAVAC) algorithm deals with the most general matching setting; both solutions produce correspondences that are partial and dense at the same time. In comparison with many state-of-the-art methods, our algorithms are tested on a variety of two-manifold meshes representing 3D shape models based on real and synthetic data.
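    The objective that isometric correspondence methods minimize can be sketched directly: the deviation between pairwise geodesic distances under a candidate mapping (the thesis builds scale-invariant and partial-matching variants of such a measure):

    ```python
    import numpy as np

    def isometric_distortion(D_src, D_tgt, mapping):
        """Average geodesic-distance distortion of a correspondence.

        D_src[i, j] / D_tgt[a, b] : (normalized) geodesic distances on the
        source / target shapes; mapping[i] = a couples source sample i with
        target sample a.  A perfectly isometric mapping scores 0.
        """
        m = np.asarray(mapping)
        n = len(m)
        diff = np.abs(D_src - D_tgt[np.ix_(m, m)])
        return diff.sum() / (n * (n - 1))
    ```

    Searching over mappings to minimize such a score is the combinatorial problem the EM formulation and the coarse-to-fine algorithm make tractable.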
  • Item
    Non-Periodic Corner Tilings in Computer Graphics
    (Schlömer, 2012-11-16) Schlömer, Thomas
    Rendering computer-generated images is both memory and runtime intensive. This is particularly true in real-time computer graphics, where large amounts of content have to be produced very quickly and from limited data. Tile-based methods offer a solution to this problem by generating large portions of specific content out of a much smaller data set of tiles. This dissertation investigates the use of corner tiles for this purpose: unit square tiles with color-coded corners. They tile the plane by placing them without gaps or overlaps such that tiles have matching corner colors. We present efficient algorithms to perform such a tiling that are both more flexible and less prone to artifacts than existing algorithms. We also present solutions to combinatorial problems that arise when using corner tiles, and introduce high-quality methods to perform the tile-based generation of two fundamental components of any rendering system: textures and two-dimensional sample point sets. The results of this dissertation are advantageous for both real-time and offline rendering systems, where they improve state-of-the-art results in texture synthesis, image-plane sampling, and lighting computations based on numerical integration.
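    The key trick that makes corner tilings easy to lay out can be shown in a few lines: choose corner colors on the lattice first, then each cell is the unique tile with those corners, so the matching constraints hold by construction (the enumeration order below is an arbitrary choice):

    ```python
    import numpy as np

    def corner_tiling(rows, cols, n_colors, seed=0):
        """Lay out a valid corner tiling over a (rows, cols) grid.

        With C colors the complete tile set has C^4 tiles; because adjacent
        cells share lattice corners, every corner color matches by design.
        """
        rng = np.random.default_rng(seed)
        corners = rng.integers(0, n_colors, size=(rows + 1, cols + 1))
        nw = corners[:-1, :-1]; ne = corners[:-1, 1:]
        sw = corners[1:, :-1];  se = corners[1:, 1:]
        tile = ((nw * n_colors + ne) * n_colors + sw) * n_colors + se
        return tile   # (rows, cols) indices into a precomputed C^4 tile set
    ```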
  • Item
    Perception-Augmenting Illumination
    (Solteszova, 2012-10-04) Solteszova, Veronika
    At each stage of the visualization pipeline, the information is degraded by loss or by noise because of imprecise acquisition, storage limitations, and processing. Furthermore, it passes through the complex and not yet well understood pathways of the human visual system to finally result in a mental image. Due to the noise that degrades the information in the visualization pipeline and the processes in the human visual system, the mental image and the real-world phenomenon do not match. From the standpoint of physics, the input to the visual system is confined to patterns of light. Illumination is therefore essential in 3D visualization for the perception of visualized objects. In this thesis, several advances in volumetric lighting are presented. First, a novel lighting model is proposed that supports interactive light source placement and yields a high-quality soft shadowing effect. The light transport is represented by conical functions and approximated with an incremental blurring operation on the opacity buffer during front-to-back slicing of the volume. Second, a new perceptually founded model for expressing shadows is presented that gives full control over the appearance of shadows in terms of color and opacity. Third, a systematic error in the perception of surface slant is modeled. This knowledge is then applied to adjust an existing shading model in a manner that compensates for the error in perception. These new visualization methodologies are linked to knowledge from perceptual psychology and to the craft of illustrators, who have experimented with visual-presentation techniques for centuries. The new methodologies are showcased on challenging acoustic modalities such as 3D medical ultrasound and sonar imaging.
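    The incremental-blur idea for conical soft shadows can be sketched on a slice stack: each slice is attenuated by the opacity accumulated so far, and the opacity buffer is blurred a little after every slice so shadows soften with distance (a headlight-aligned toy version; the thesis supports interactively placed lights via conical light-transport functions):

    ```python
    import numpy as np

    def slice_soft_shadows(volume_alpha, blur_radius=1):
        """Front-to-back slicing with an incrementally blurred opacity buffer.

        volume_alpha : (depth, H, W) per-voxel opacity, slices ordered from
        the light.  Returns the opacity-weighted lit contribution per slice.
        """
        depth, h, w = volume_alpha.shape
        shadow = np.zeros((h, w))
        lit = np.empty_like(volume_alpha)
        for z in range(depth):
            lit[z] = (1.0 - shadow) * volume_alpha[z]   # light reaching slice z
            shadow = shadow + (1.0 - shadow) * volume_alpha[z]
            shadow = box_blur(shadow, blur_radius)      # widen the shadow cone
        return lit

    def box_blur(img, r):
        """Separable box blur of radius r (toroidal edges via np.roll)."""
        out = np.copy(img)
        for axis in (0, 1):
            acc = np.zeros_like(img)
            for s in range(-r, r + 1):
                acc += np.roll(out, s, axis=axis)
            out = acc / (2 * r + 1)
        return out
    ```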