2018


Visual Analytics to Support Evidence-Based Decision Making

Ruppert, Tobias

Automatic Optimization of 3D Mesh Data for Real-Time Online Presentation

Limper, Max Alfons

The Secret of Appeal - Understanding Perception of Realistic and Stylized Faces

Zell, Eduard

Interfaces and Boundaries in Physics Based Simulation of Solids and Fluids

Koschier, Dan Alexander

Local Geometry Processing for Deformations of Non-Rigid 3D Shapes

Melzi, Simone

Real-Time Generative Hand Modeling and Tracking

Tkach, Anastasia

Computational Design of Flexible Structures

Pérez Rodríguez, Jesús

Eye Reconstruction and Modeling for Digital Humans

Bérard, Pascal

Scalable exploration of 3D massive models

Jaspe Villanueva, Alberto

The high dynamic range imaging pipeline: Tone-mapping, distribution, and single-exposure reconstruction

Eilertsen, Gabriel

On-site Surface Reflectometry

Riviere, Jérémy

Efficient Methods for Computational Light Transport

Marco, Julio

Layered Models for Large Scale Time-Evolving Landscapes

Cordonnier, Guillaume

Shape Processing for Content Generation

Schinko, Christoph

Generative Methods for Data Completion in Shape Driven Systems

Krispel, Ulrich



Recent Submissions

Now showing 1 - 15 of 15
  • Item
    Visual Analytics to Support Evidence-Based Decision Making
    (TU Darmstadt (TUPrints), 2018) Ruppert, Tobias
    The aim of this thesis is the design of visual analytics solutions to support evidence-based decision making. Due to the ever-growing complexity of the world, strategic decision making has become an increasingly challenging task. At the business level, decisions are no longer driven solely by economic factors. Environmental and social aspects are also taken into account in modern business decisions. At the political level, sustainable decision making is additionally influenced by public opinion, since politicians aim to retain their power. Decision makers face the challenge of taking all these factors into consideration and, at the same time, of increasing their efficiency to react immediately to abrupt changes in their environment. In the era of digitization, large amounts of data are stored digitally. The knowledge hidden in these datasets can be used to address the mentioned challenges in decision making. However, handling large datasets, extracting knowledge from them, and incorporating this knowledge into the decision making process poses significant challenges. Additional complexity is added by the varying expertise of the stakeholders involved in the decision making process. Strategic decisions today are not made solely by individuals. Instead, a consortium of advisers, domain experts, analysts, etc. supports decision makers in their final choice. The number of stakeholders involved bears the risk of hampering communication efficiency and effectiveness due to knowledge gaps arising from different levels of expertise. Information systems research has reacted to these challenges by promoting research in computational decision support systems. However, recent research shows that most of the challenges remain unsolved. During the last decades, visual analytics has evolved as a research field for extracting knowledge from large datasets. 
Combining human perception capabilities with computers’ processing power thus offers great analysis potential, also for decision making. However, despite obvious overlaps between decision making and visual analytics, theoretical foundations for applying visual analytics to decision making have been missing. In this thesis, we promote the augmentation of decision support systems with visual analytics. Our concept comprises a methodology for the design of visual analytics systems that target decision making support. To this end, we first introduce a general decision making domain characterization, comprising the analysis of potential users, relevant data categories, and decision making tasks to be supported with visual analytics technologies. Second, we introduce a specialized design process for the development of visual analytics decision support systems. Third, we present two models of how visual analytics facilitates the bridging of knowledge gaps between stakeholders involved in the decision making process: one for decision making at the business level and one for political decision making. To prove the applicability of our concepts, we apply our design methodology in several design studies targeting concrete decision making support scenarios. The presented design studies cover the full range of data, user, and task categories characterized as relevant for decision making. Within these design studies, we first tailor our general decision making domain characterization to the specific domain problem at hand. We show that our concept supports a consistent characterization of user types, data categories, and decision making tasks for specific scenarios. Second, each design study follows the design process presented in our concept. Third, the design studies demonstrate how to bridge knowledge gaps between stakeholders. 
The resulting visual analytics systems allow the incorporation of knowledge extracted from data into the decision making process and support the collaboration of stakeholders with varying levels of expertise.
  • Item
    Automatic Optimization of 3D Mesh Data for Real-Time Online Presentation
    (2018-06-05) Limper, Max Alfons
    Interactive 3D experiences are becoming increasingly available as a part of our every-day life. Examples range from common video games to virtual reality experiences and augmented reality apps on smart phones. A rapidly growing area is that of interactive 3D applications running inside common Web browsers, making it possible to serve millions of users worldwide using solely standard Web technology. However, while Web-based 3D presentation technology is getting more and more advanced, a crucial problem that remains is the optimization of 3D mesh data, such as highly detailed 3D scans, for efficient transmission and online presentation. In this context, the need for dedicated 3D experts, able to work with various specialized tools, significantly limits the scalability of 3D optimization workflows in many important areas, such as Web-based 3D retail or online presentation of cultural heritage. Moreover, since Web-based 3D experiences are nowadays ubiquitous, an optimal delivery format must work well on a wide range of possible client devices, including tablet PCs and smart phones, while still offering acceptable compression rates and progressive streaming. Automatically turning high-resolution 3D meshes into compact 3D representations for online presentation, using an efficient standard format for compression and transmission, is therefore an important key challenge, which has remained largely unsolved so far. Within this thesis, a fully-automated pipeline for appearance-preserving optimization of 3D mesh data is presented, enabling direct conversion of high-resolution 3D meshes to an optimized format for real-time online presentation. The first part of this thesis discusses 3D mesh processing algorithms for fully-automatic optimization of 3D mesh data, including mesh simplification and texture mapping. 
In this context, a novel saliency detection method for mesh simplification is presented, as well as a new method for automatic overlap removal in parameterizations using cuts with minimum length and, finally, a method to compact texture atlases using a cut-and-repack strategy. The second part of the thesis deals with the design of an optimized format for 3D mesh data on the Web. It covers various relevant aspects, such as efficient encoding of mesh geometry and mesh topology, a physically-based format for material data, and progressive streaming of textured triangle meshes. The contributions made in this context during the creation of this thesis had notable impact on the design of the current standard format for 3D mesh data on the Web, glTF 2.0, which is nowadays supported by the vast majority of online 3D viewers.
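Appearance-preserving simplification pipelines of this kind are typically driven by a per-vertex error measure. As a minimal illustrative sketch (the classic quadric error metric of Garland and Heckbert, a common baseline, not the thesis's saliency-guided method), each vertex accumulates a 4x4 quadric from its incident triangle planes, and edge collapses prefer positions with low quadric error:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """4x4 quadric Q = p p^T for the plane ax + by + cz + d = 0 (unit normal)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Sum of squared distances of point v to the planes accumulated in Q."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)  # homogeneous coordinates
    return float(vh @ Q @ vh)

# A vertex on the plane z = 0 has zero error; moving it off-plane is penalized,
# so collapses keep the simplified mesh close to the original surface.
Q = plane_quadric(0.0, 0.0, 1.0, 0.0)
err_on = vertex_error(Q, [3.0, -2.0, 0.0])
err_off = vertex_error(Q, [0.0, 0.0, 0.5])
```

In a full simplifier, the quadrics of the two endpoints are summed on collapse, so the error always measures deviation from the original surface planes.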
  • Item
    The Secret of Appeal - Understanding Perception of Realistic and Stylized Faces
    (Verlag Dr. Hut, 2018-07-16) Zell, Eduard
    Stylized characters are widely used in movies and games. Furthermore, stylization is often preferred over realism for the design of toys and social robots. However, the design process remains highly subjective because the influence of possible design choices on character perception is not well understood. Investigating the high-dimensional space of character stylization by means of perception experiments is difficult because creating and animating compelling characters at different stylization levels remains a challenging task. In this context, computer graphics algorithms enable the creation of highly controllable stimuli, simplifying the examination of specific features that can strongly influence the overall perception of a character. This thesis is separated into two parts. First, a pipeline is presented for creating virtual doubles of real people. In addition, algorithms suitable for the transfer of surface properties and animation between faces of different stylization levels are described. With ElastiFace, a simple and versatile method is introduced for establishing dense correspondences between textured face models. The method extends non-rigid registration techniques to allow for strongly varying input geometries. The technical part closes with an algorithm that addresses the problem of animation transfer between faces. Such facial retargeting frameworks consist of a pre-processing step, where blendshapes are transferred from one face to another. By exploiting the similarities between an expressive training sequence of an actor and the blendshapes of a facial rig to be animated, the accuracy of transferring the blendshapes to the actor's proportions is greatly improved. Consequently, this step enhances the overall reliability and quality of facial retargeting. The second part covers two different perception studies with stimuli created by using the previously described pipeline and algorithms. 
Results of both studies improve the understanding of the crucial factors for creating appealing characters across different stylization levels. The first study analyzes the most influential factors that define a character's appearance by using rating scales in four different perceptual experiments. In particular, it focuses on shape and material but also considers shading, lighting and albedo. The study reveals that shape is the dominant factor when rating expression intensity and realism, while material is crucial for appeal. Furthermore, the results show that realism alone is a bad predictor for appeal, eeriness, or attractiveness. The second study investigates how various degrees of stylization are processed by the brain using event-related potentials (ERPs). Specifically, it focuses on the N170, early posterior negativity (EPN), and late positive potential (LPP) event-related components. The face-specific N170 shows a u-shaped modulation, with stronger reactions towards both the most abstract and the most realistic faces compared to medium-stylized ones. In addition, the LPP increases linearly with face realism, reflecting increased activity in the visual and parietal cortex for more realistic faces. The results reveal differential effects of face stylization on distinct face processing stages and suggest a perceptual basis for the uncanny valley hypothesis.
  • Item
    Interfaces and Boundaries in Physics Based Simulation of Solids and Fluids
    (2018-07-10) Koschier, Dan Alexander
    Recent developments concerning the numerical simulation of solid objects and fluid flows in the field of computer graphics have opened up a plethora of new possibilities in applications such as special effect productions, animated movies, Virtual Reality (VR) applications, medical simulators, and computer games. Although various techniques for the simulation of solids and fluid flows exist, the accurate incorporation of complex boundary geometries and interface models still poses a great challenge. Nevertheless, a robust handling of these interface descriptions is indispensable for a wide range of applications. Among other purposes, interface models are frequently used to represent the boundary surfaces of solid objects, the layer between different materials, the boundary geometry of domains trapping a fluid, or even to represent cuts, tears, or cracks separating the material within an object. In order to be able to simulate even more complex phenomena and to enhance existing approaches, advanced methods and new techniques for the efficient and robust numerical simulation of solids and fluids with complex interfaces have to be developed. The contributions of this thesis are organized into three parts. In the first part, two novel methods based on Finite Element (FE) discretizations for the simulation of brittle fracture and cutting of deformable solids are presented. The first chapter in this part focuses on the physically motivated generation of brittle fractures using an adaptive stress analysis. While this approach captures crack interfaces by explicit remeshing and element duplication, the approach described in the second chapter of the first part captures the interface implicitly by using enrichment functions that are directly embedded into the FE discretization. The enrichment based technique is able to capture even highly complex and finely structured cuts with high accuracy while any form of remeshing is completely avoided. 
The second part of this thesis is concerned with a novel discretization approach for implicit interface representations. An arbitrary surface in three-dimensional space can be represented as an isosurface of a signed distance function. In the first step, the novel approach discretizes the signed distance function into a grid structure using piecewise polynomials. Subsequently, the initial discretization is refined in order to improve the discretization accuracy. The presented method is the first approach that not only refines the grid cells spatially but also varies the degree of the polynomial basis. With this approach, even highly complicated surfaces can be accurately discretized while keeping the memory consumption to a minimum. In the third and final part of this thesis, a novel approach for the simulation of incompressible fluids and a method to handle non-penetration boundary conditions using the novel concept of precomputed density maps are presented. Building on the Navier-Stokes equations for isothermal incompressible fluids, the partial differential equation is spatially discretized using the Smoothed Particle Hydrodynamics (SPH) formalism. Incompressibility is then ensured using a novel pressure solver that enforces both a constant density field throughout the fluid and a divergence-free velocity field. In order to enforce non-penetration, an implicit representation of the boundary interface is constructed and a density map is precomputed. Using the novel concept of density maps, non-penetration boundary conditions can be handled using efficient lookups into the map with constant complexity while the requirement to sample the boundary interface geometry with particles vanishes.
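The SPH formalism mentioned above estimates continuous field quantities as kernel-weighted sums over neighboring particles. A minimal sketch of the standard density summation with the widely used poly6 kernel (a textbook formulation, not the thesis's pressure solver or density maps), using a naive all-pairs loop instead of a neighbor search:

```python
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel (Mueller et al. 2003): compactly supported on r < h."""
    w = np.zeros_like(r, dtype=float)
    inside = r < h
    w[inside] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[inside]**2) ** 3
    return w

def sph_density(positions, masses, h):
    """Per-particle density rho_i = sum_j m_j W(|x_i - x_j|, h), naive O(n^2)."""
    x = np.asarray(positions, dtype=float)
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return poly6(dists, h) @ np.asarray(masses, dtype=float)

# Two nearby unit-mass particles: by symmetry both estimate the same density.
rho = sph_density([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]], [1.0, 1.0], h=0.2)
```

A pressure solver of the kind described then iterates on per-particle pressures until such density estimates match the rest density.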
  • Item
    Local Geometry Processing for Deformations of Non-Rigid 3D Shapes
    (2018-06-19) Melzi, Simone
    Geometry processing, and in particular spectral geometry processing, deals with many different deformations that complicate shape analysis problems for non-rigid 3D objects. Furthermore, point-wise description of surfaces has increasing relevance for several applications such as shape correspondence and matching, shape representation, shape modelling and many others. In this thesis we propose four local approaches to face the problems generated by the deformations of real objects and to improve the point-wise characterization of surfaces. Differently from global approaches that work simultaneously on the entire shape, we focus on the properties of each point and its local neighbourhood. Global analysis of shapes is not a drawback in itself. However, when local variations, distortions and deformations have to be dealt with, it is often challenging to relate two real objects globally. For this reason, in the last decades, several instruments have been introduced for the local analysis of images, graphs, shapes and surfaces. Starting from this idea of localized analysis, we propose both theoretical insights and application tools within the local geometry processing domain. In more detail, we extend the windowed Fourier transform from standard Euclidean signal processing to different versions specifically designed for spectral geometry processing. Moreover, from the spectral geometry processing perspective, we define a new family of localized bases for the functional space defined on surfaces that improve the spatial localization for standard applications in this field. Finally, we introduce the discrete time evolution process as a framework that characterizes a point through its pairwise relationship with the other points on the surface at an increasing scale of locality. The main contribution of this thesis is a set of tools for local geometry processing and local spectral geometry processing that can be used in standard applications. 
The overall observation of our analysis is that localization around points can substantially improve geometry processing in many different applications.
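On the discrete side, such spectral constructions are commonly illustrated with a graph Laplacian standing in for the Laplace-Beltrami operator. A minimal sketch (illustrative only, not the thesis's windowed transforms or localized bases): expanding a delta function in the Laplacian eigenbasis and damping high frequencies yields a function localized around the chosen point, exactly the kind of point-centered analysis discussed above.

```python
import numpy as np

def graph_laplacian(edges, n):
    """Combinatorial Laplacian L = D - A of an undirected graph on n vertices."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0; L[j, i] -= 1.0
        L[i, i] += 1.0; L[j, j] += 1.0
    return L

def heat_kernel_at(L, src, t):
    """Localized function exp(-t L) delta_src, built in the eigenbasis of L."""
    lam, phi = np.linalg.eigh(L)
    coeffs = phi[src, :] * np.exp(-t * lam)  # damped spectral coefficients
    return phi @ coeffs

# Heat released at vertex 0 of a 5-vertex path graph stays concentrated there.
f = heat_kernel_at(graph_laplacian([(0, 1), (1, 2), (2, 3), (3, 4)], 5), src=0, t=0.5)
```

The parameter t controls the scale of locality: small t keeps the function tightly concentrated, large t spreads it over the whole shape.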
  • Item
    Real-Time Generative Hand Modeling and Tracking
    (EPFL, 2018-08-30) Tkach, Anastasia
    In our everyday life we interact with the surrounding environment using our hands. A main focus of recent research has been to bring such interaction to virtual objects, such as the ones projected in virtual reality devices, or super-imposed as holograms in AR/MR headsets. For these applications, it is desirable for the tracking technology to be robust, accurate, and have a seamless deployment. In this thesis we address these requirements by proposing an efficient and robust hand tracking algorithm, introducing a hand model representation that strikes a balance between accuracy and performance, and presenting an online algorithm for precise hand calibration. In the first part we present a robust method for capturing articulated hand motions in real time using a single depth camera. Our system is based on a real-time registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. In the second part we propose the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user’s static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. 
We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60 Hz. In the third part we introduce an online hand calibration method that learns the geometry as the user performs live in front of the camera, thus enabling seamless virtual interaction at the consumer level. The key novelty in our approach is an online optimization algorithm that jointly estimates pose and shape in each frame, and determines the uncertainty in such estimates. This knowledge allows the algorithm to integrate per-frame estimates over time, and build a personalized geometric model of the captured user. Our approach can easily be integrated in state-of-the-art continuous generative motion tracking software. We provide a detailed evaluation that shows how our approach achieves accurate motion tracking for real-time applications, while significantly simplifying the workflow of accurate hand performance capture.
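A sphere-mesh bounds geometry by spheres whose radii interpolate along a skeleton of segments. As a simplified sketch of the underlying primitive used when fitting such a model to depth samples (here the constant-radius special case, an ordinary capsule; the general tapered case requires a slightly more involved projection):

```python
import numpy as np

def capsule_distance(p, a, b, r):
    """Signed distance from point p to the capsule around segment [a, b], radius r."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)  # closest segment param
    return np.linalg.norm(p - (a + t * ab)) - r

# A depth sample 2 units above the segment, against a capsule of radius 0.5.
d = capsule_distance([0.0, 2.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], r=0.5)
```

A registration loop of the kind described would minimize the sum of squared such distances over all depth samples with respect to the pose and shape parameters.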
  • Item
    Computational Design of Flexible Structures
    (URJC, 2018-01-31) Pérez Rodríguez, Jesús
    Computational fabrication technologies have revolutionized manufacturing by offering unprecedented control over the shape and material of the fabricated objects at accessible costs. These technologies allow users to design and create objects with arbitrary properties of motion, appearance or deformation. This rich environment spurs the creativity of designers and produces an increasing demand for computer-aided design tools that alleviate design complexity even for non-expert users. Motivated by this fact, in this thesis, we address the computational design and automatic fabrication of flexible structures, assemblies of interrelated elements that exhibit elastic behavior. We build upon mechanical simulation and numerical optimization to create innovative computational tools that model the attributes of the fabricated objects, predict their static deformation behavior, and automatically infer design attributes from user-specified goals. With this purpose, we propose a novel mechanical model for the efficient simulation of flexible rod meshes that avoids the use of numerical constraints. Then, we devise compact and expressive parameterizations of flexible structures that naturally produce coherent designs. Our tools implement inverse design functionalities based on a sensitivity-based optimization algorithm, which we further extend to deal with local-minimum solutions and highly constrained problems. Additionally, we propose interaction approaches that guide the user through the design process. Finally, we validate all these contributions by developing computer-aided design solutions that facilitate the creation of flexible rod meshes and Kirchhoff-Plateau surfaces. In the first part of this work, we overview the relevant foundations of mechanical simulation, analyze the optimization problem that arises from inverse elastic design and discuss alternative solutions. 
Then, in the second part, we propose a computational method for the design of flexible rod meshes that automatically computes a fabricable design from user-defined deformation examples. Finally, in the last part, we study the design and fabrication of Kirchhoff-Plateau surfaces and present a tool for interactively exploring the space of fabricable solutions.
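The sensitivity-based inverse design loop described above can be sketched in miniature: simulate the static equilibrium for the current design, differentiate it with respect to the design parameter, and descend on the deviation from the user's goal. Everything below is an illustrative toy (a one-parameter spring model and finite-difference sensitivities), not the thesis's formulation:

```python
def inverse_design(target, equilibrium, p0, lr=0.5, iters=100, fd=1e-6):
    """Fit a design parameter so the simulated equilibrium matches a target.

    The gradient of the squared error uses a finite-difference sensitivity
    dx/dp, a toy stand-in for analytic sensitivity analysis.
    """
    p = p0
    for _ in range(iters):
        x = equilibrium(p)
        dxdp = (equilibrium(p + fd) - equilibrium(p - fd)) / (2.0 * fd)
        p -= lr * 2.0 * (x - target) * dxdp  # descend on (x(p) - target)^2
    return p

# Toy static model: the sag of a mass hanging on a spring with rest length p.
g_over_k = 0.981                   # illustrative m*g/k
sag = lambda p: p + g_over_k       # static equilibrium position
p_opt = inverse_design(target=2.0, equilibrium=sag, p0=0.0)
```

In practice the equilibrium is a full elastic simulation and p is high-dimensional, so the sensitivities are computed analytically or via adjoints rather than by finite differences.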
  • Item
    Eye Reconstruction and Modeling for Digital Humans
    (2018) Bérard, Pascal
    The creation of digital humans is a long-standing challenge of computer graphics. Digital humans are tremendously important for applications in visual effects and virtual reality. The traditional way to generate digital humans is through scanning. Facial scanning in general has become ubiquitous in digital media, but most efforts have focused on reconstructing the skin only. Arguably, the most important features of a digital human are the eyes. Even though the human eye is one of the central features of an individual’s appearance, its shape and motion have so far been mostly approximated in the computer graphics community with gross simplifications. To fill this gap, we investigate in this thesis methods for the creation of eyes for digital humans. We present algorithms for the reconstruction, the modeling, and the rigging of eyes for computer animation and tracking applications. To faithfully reproduce all the intricacies of the human eye we propose a novel capture system that is capable of accurately reconstructing all the visible parts of the eye: the white sclera, the transparent cornea and the non-rigidly deforming colored iris. These components exhibit very different appearance properties and thus we propose a hybrid reconstruction method that addresses them individually, resulting in a complete model of both spatio-temporal shape and texture at an unprecedented level of detail. This capture system is time-consuming to use and cumbersome for the actor, making it impractical for general use. To address these constraints we present the first approach for high-quality lightweight eye capture, which leverages a database of pre-captured eyes to guide the reconstruction of new eyes from much less constrained inputs, such as traditional single-shot face scanners or even a single photo from the internet. This is accomplished with a new parametric model of the eye built from the database, and a novel image-based model fitting algorithm. 
For eye animation we present a novel eye rig informed by ophthalmology findings and based on accurate measurements from a new multi-view imaging system that can reconstruct eye poses at submillimeter accuracy. Our goal is to raise the awareness in the computer graphics and vision communities that eye movement is more complex than typically assumed, and provide a new eye rig for animation that models this complexity. Finally, we believe that the findings of this thesis will alter current assumptions in computer graphics regarding human eyes, and our work has the potential to significantly impact the way that eyes of digital humans will be modelled in the future.
  • Item
    Scalable exploration of 3D massive models
    (2018-11-26) Jaspe Villanueva, Alberto
    This thesis introduces scalable techniques that advance the state-of-the-art in massive model creation and exploration. Concerning model creation, we present methods for improving reality-based scene acquisition and processing, introducing an efficient implementation of scalable out-of-core point clouds and a data-fusion approach for creating detailed colored models from cluttered scene acquisitions. The core of this thesis concerns enabling technology for the exploration of general large datasets. Two novel solutions are introduced. The first is an adaptive out-of-core technique exploiting the GPU rasterization pipeline and hardware occlusion queries in order to create coherent batches of work for localized shader-based ray tracing kernels, opening the door to out-of-core ray tracing with shadowing and global illumination. The second is an aggressive compression method that exploits redundancy in large models to compress data so that it fits, in fully renderable format, in GPU memory. The method is targeted to voxelized representations of 3D scenes, which are widely used to accelerate visibility queries on the GPU. Compression is achieved by merging subtrees that are identical through a similarity transform and by exploiting the skewed distribution of references to shared nodes to store child pointers using a variable bit-rate encoding. The capability and performance of all methods are evaluated on many very massive real-world scenes from several domains, including cultural heritage, engineering, and gaming.
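The compression step exploits the observation that voxelized scenes contain many identical subtrees. A minimal sketch of the core deduplication idea behind such sparse voxel DAGs, using a hash table so each distinct subtree is stored once (the thesis goes further, also merging subtrees identical up to a similarity transform and variable-bit-rate packing of child pointers, which this sketch omits):

```python
def dedup(node, cache):
    """Recursively merge identical subtrees of a sparse voxel tree.

    A node is either a leaf occupancy value or a tuple of children. Structurally
    identical subtrees hash to the same key, so shared geometry is stored
    exactly once and the tree becomes a directed acyclic graph.
    """
    if not isinstance(node, tuple):          # leaf: 0/1 occupancy
        return node
    canon = tuple(dedup(c, cache) for c in node)
    return cache.setdefault(canon, canon)    # reuse an existing identical subtree

cache = {}
# Two branches with identical occupancy patterns collapse into one shared node.
root = dedup(((1, 0), (1, 0)), cache)
```

After deduplication, both children of the root reference the same object in memory, which is exactly the redundancy that lets entire scenes fit in GPU memory in renderable form.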
  • Item
    The high dynamic range imaging pipeline: Tone-mapping, distribution, and single-exposure reconstruction
    (Linköping University Electronic Press, 2018-06-08) Eilertsen, Gabriel
    Techniques for high dynamic range (HDR) imaging make it possible to capture and store an increased range of luminances and colors as compared to what can be achieved with a conventional camera. This high amount of image information can be used in a wide range of applications, such as HDR displays, image-based lighting, tone-mapping, computer vision, and post-processing operations. HDR imaging has been an important concept in research and development for many years. Within the last couple of years it has also reached the consumer market, e.g. with TV displays that are capable of reproducing an increased dynamic range and peak luminance. This thesis presents a set of technical contributions within the field of HDR imaging. First, the area of HDR video tone-mapping is thoroughly reviewed, evaluated and developed upon. A subjective comparison experiment of existing methods is performed, followed by the development of novel techniques that overcome many of the problems evidenced by the evaluation. Second, a large-scale objective comparison is presented, which evaluates existing techniques that are involved in HDR video distribution. From the results, a first open-source HDR video codec solution, Luma HDRv, is built using the best performing techniques. Third, a machine learning method is proposed for the purpose of reconstructing an HDR image from one single-exposure low dynamic range (LDR) image. The method is trained on a large set of HDR images, using recent advances in deep learning, and the results increase the quality and performance significantly as compared to existing algorithms. The areas for which contributions are presented can be closely inter-linked in the HDR imaging pipeline. Here, the thesis work helps in promoting efficient and high-quality HDR video distribution and display, as well as robust HDR image reconstruction from a single conventional LDR image.
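Tone-mapping compresses HDR luminances into a displayable range. As a hedged baseline sketch of the kind of global operator such evaluations typically include (the classic Reinhard photographic operator; the video operators developed in the thesis are considerably more elaborate):

```python
import numpy as np

def reinhard_tonemap(luminance, a=0.18, eps=1e-6):
    """Global Reinhard operator: map HDR luminance into [0, 1).

    Scales by the key value `a` over the log-average (geometric mean)
    luminance, then applies the compressive curve L / (1 + L).
    """
    L = np.asarray(luminance, dtype=float)
    log_avg = np.exp(np.mean(np.log(L + eps)))  # geometric mean luminance
    Ls = a * L / log_avg                        # scaled luminance
    return Ls / (1.0 + Ls)

# Five input luminances spanning five orders of magnitude fit into [0, 1).
hdr = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])
ldr = reinhard_tonemap(hdr)
```

Video tone-mapping adds the temporal dimension: per-frame statistics like the log-average must be filtered over time to avoid flickering, one of the problems the thesis's evaluation highlights.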
  • Item
    On-site Surface Reflectometry
    (EThOS, 2017-08-26) Riviere, Jérémy
    The rapid development of Augmented Reality (AR) and Virtual Reality (VR) applications over the past years has created the need to quickly and accurately scan the real world to populate immersive, realistic virtual environments for the end user to enjoy. While geometry processing has already gone a long way towards that goal, with self-contained solutions commercially available for on-site acquisition of large scale 3D models, capturing the appearance of the materials that compose those models remains an open problem in general uncontrolled environments. The appearance of a material is indeed a complex function of its geometry and intrinsic physical properties, and furthermore depends on the illumination conditions in which it is observed, thus traditionally limiting the scope of reflectometry to highly controlled lighting conditions in a laboratory setup. With the rapid development of digital photography, especially on mobile devices, a new trend has emerged in the appearance modelling community that investigates novel acquisition methods and algorithms to relax the hard constraints imposed by laboratory-like setups, for easy use by digital artists. While arguably not as accurate, such self-contained methods, as we demonstrate, enable quick and easy solutions for on-site reflectometry and are able to produce compelling, photo-realistic imagery. In particular, this dissertation investigates novel methods for on-site acquisition of surface reflectance based on off-the-shelf, commodity hardware. We successfully demonstrate how a mobile device can be utilised to capture high quality reflectance maps of spatially-varying planar surfaces in general indoor lighting conditions. We further present a novel methodology for the acquisition of highly detailed reflectance maps of permanent on-site, outdoor surfaces by exploiting polarisation from reflection under natural illumination. 
We demonstrate the versatility of the presented approaches by scanning various surfaces from the real world and show good qualitative and quantitative agreement with existing methods for appearance acquisition employing controlled or semi-controlled illumination setups.
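Polarisation-based reflectometry starts from per-pixel polarisation measurements. A textbook sketch of the degree of linear polarisation estimated from three captures through a rotating linear polariser at 0°, 45° and 90° (the standard Stokes-parameter formulas, not the thesis's full outdoor pipeline):

```python
import numpy as np

def linear_dop(i0, i45, i90):
    """Degree of linear polarisation from polariser-filter intensities."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs. vertical preference
    s2 = 2.0 * i45 - s0      # diagonal preference
    return np.sqrt(s1**2 + s2**2) / s0

dop_polarised = linear_dop(1.0, 0.5, 0.0)   # fully linearly polarised light
dop_unpolarised = linear_dop(0.5, 0.5, 0.5) # unpolarised light
```

Because specular reflection polarises light much more strongly than diffuse scattering, such per-pixel measurements help separate the two components, which is what makes the reflectance maps recoverable outdoors.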
  • Item
    Efficient Methods for Computational Light Transport
    (Universidad de Zaragoza, 2018-10-26) Marco, Julio
    In this thesis we present contributions to different challenges of computational light transport. Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances in this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for participating media rendering. In real-time rendering, we target energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In transient-state we first formalize light transport simulation under this domain, and present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.
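In transient rendering, each sampled light path contributes not only radiance but also a time of arrival given by its optical path length. A minimal sketch of binning path contributions into a transient histogram (illustrative names and units, not the thesis's algorithms; summing all bins recovers the steady-state value):

```python
import numpy as np

def transient_histogram(path_lengths, energies, bin_width, n_bins, c=299_792_458.0):
    """Accumulate path contributions into time-of-flight bins, t = length / c."""
    t = np.asarray(path_lengths, dtype=float) / c
    bins = np.minimum((t / bin_width).astype(int), n_bins - 1)  # clamp overflow
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, np.asarray(energies, dtype=float))    # scatter-add
    return hist

# Three paths in normalized units (c = 1): two arrive early, one arrives late.
hist = transient_histogram([0.5, 0.5, 1.5], [1.0, 2.0, 3.0],
                           bin_width=1.0, n_bins=2, c=1.0)
```

Time-of-Flight cameras record histograms of this kind, which is why simulated transient data can be used to characterize and correct multipath interference.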
  • Item
    Layered Models for Large Scale Time-Evolving Landscapes
    (Université Grenoble Alpes, 2019-12-06) Cordonnier, Guillaume
    The development of new technologies and algorithms allows the interactive visualization of virtual worlds showing an increasing amount of detail and spatial extent. The production of plausible landscapes within these worlds becomes a major challenge, not only because of the important part that terrain features and ecosystems play in the quality and realism of 3D sceneries, but also because of the complexity of editing large landforms at mountain-range scales. Interactive authoring is often achieved by coupling editing techniques with computationally demanding numerical simulation, whose calibration becomes harder as the number of non-intuitive parameters increases. This thesis develops new methods for the simulation of large-scale landscapes. Our goal is to improve both the control and the realism of the synthetic scenes. Our strategy to increase plausibility consists of building our methods on physically and geomorphologically inspired laws: we develop new numerical methods which, combined with intuitive control tools, improve the user experience. By observing phenomena triggered by compression areas within the Earth's crust, we propose a method for the intuitive control of uplift based on a metaphor of sculpting the tectonic plates. Combined with new efficient methods for fluvial and glacial erosion, this allows the fast sculpting of large mountain ranges. In order to visualize the resulting landscapes within human sight, we demonstrate the need to combine the simulation of various phenomena with different time spans, and we propose a stochastic simulation technique to solve this complex cohabitation. This methodology is applied to the simulation of geological processes such as erosion interleaved with ecosystem formation. This method is then implemented on the GPU, combining long-term effects (snowfall, phase changes of water) with highly dynamic ones (avalanches, skier impact).
Our methods allow the simulation of the evolution of large-scale, visually plausible landscapes while accounting for user control. These results were validated by user studies as well as by comparisons with data obtained from real landscapes.
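The fluvial erosion used in such simulations is commonly modelled with the stream-power law, dh/dt = U − K·A^m·S^n, where U is uplift, A drainage area and S slope. The following minimal 1D sketch (an illustration under assumed names and parameter values, not the thesis implementation) performs one explicit time step:

```python
def stream_power_step(h, area, uplift, dt, dx, k=1e-5, m=0.5, n=1.0):
    """One explicit step of stream-power erosion on a 1D river profile.
    h: node heights with the outlet at index 0; area: drainage areas (m^2)."""
    new_h = h[:]
    for i in range(1, len(h)):
        slope = max(0.0, (h[i] - h[i - 1]) / dx)     # downstream slope
        erosion = k * (area[i] ** m) * (slope ** n)  # stream-power law
        new_h[i] = h[i] + dt * (uplift - erosion)    # uplift vs. erosion
    return new_h
```

Explicit schemes like this require small time steps for stability; implicit solvers are the usual remedy when simulating over geological time spans.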
  • Item
    Shape Processing for Content Generation
    (2018) Schinko, Christoph
    The thesis "Shape Processing for Content Generation" by Christoph Schinko presents work on generative modeling, novel applications for inverse generative modeling, and visualization systems. These areas are regarded as steps in the context of shape processing, and the thesis is structured accordingly. After defining the term shape, the first part of the thesis is concerned with shape descriptions. While some shape descriptions are of an abstract nature, others can be used directly, for example in the field of computer-aided geometric design. The process of working with shape descriptions is called shape modeling. This topic includes primitive modeling using 3D modeling software or scene description languages, semantic modeling dealing with meta data, and generative modeling using domain-specific information. An application for generative modeling in the context of wedding rings is implemented using a domain-specific language for generative modeling – the Generative Modeling Language (GML). The multitude of involved platforms (the GML is implemented in C++, the postfix notation of the language itself is similar to Adobe PostScript, and the application is targeted for the web) has inspired the idea of creating an innovative meta-modeler approach called "Euclides". Its concept of combining a beginner-friendly syntax with translation back-ends for various platforms provides a foundation for the platform-independent creation of generative building blocks. This approach significantly reduces the effort of implementing and maintaining generative descriptions for different platforms. Building on previous work on finding the best generative description of one or several given instances of an object class, an application to analyze digitized objects in terms of changes and damages is presented. The system automatically combines generative descriptions with reconstructed objects and performs a nominal/actual value comparison.
By applying the variances of the reconstructed objects to a different parameter set of the generative description, new shapes can be created. With this novel approach, the design of shapes using both low-level details and high-level shape parameters becomes possible. The last step in the context of shape processing is concerned with visualization systems that allow humans to perceive and interact with shapes. In this context, a novel method is presented for projecting a coherent, seamless and perspectively corrected image from one particular viewpoint using an arbitrary number of projectors. The approach distinguishes itself by being quick and efficient. The last contribution describes an optimized stereoscopic display based on parallax barriers for a driving simulator.
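A generative description of the kind used in the wedding-ring application can be illustrated with a toy example (the parameterisation below is an assumption for illustration, not GML code): two high-level parameters, radius and thickness, generate the whole shape, so varying them yields new ring instances.

```python
import math

def ring_points(radius, thickness, n_major=16, n_minor=8):
    """Toy generative description of a ring: sample a torus surface
    controlled by two high-level shape parameters."""
    pts = []
    for i in range(n_major):
        u = 2.0 * math.pi * i / n_major           # angle around the ring
        for j in range(n_minor):
            v = 2.0 * math.pi * j / n_minor       # angle around the profile
            r = radius + thickness * math.cos(v)  # distance from the axis
            pts.append((r * math.cos(u), r * math.sin(u), thickness * math.sin(v)))
    return pts
```

Re-evaluating such a description with a modified parameter set is exactly what makes the nominal/actual comparison and shape-variation workflow possible.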
  • Item
    Generative Methods for Data Completion in Shape Driven Systems
    (2018) Krispel, Ulrich
    In many application domains, such as building planning, construction, or documentation, it is of high importance to acquire a digital representation of the shape of real-world objects, e.g. for visualization or documentation purposes. Such objects are often part of a class or domain of similarly structured objects, and complex objects, such as houses, are often composed of simpler objects, such as walls, doors and windows. Man-made objects in particular exhibit such structure, mostly for manufacturability and design reasons. A rich digital representation of a complex object consists not only of its shape, but also of its structure, i.e. the composition hierarchy of simpler objects. A more general way to represent such a composition hierarchy is a generative model, which generates the structure upon evaluation; a parametric generative model can generate a whole class of similarly structured objects. In this thesis, I review shape-based methods for the generative creation of models, and present a novel system for generative forward modeling based on shape grammars. Furthermore, I present two methods for solving the inverse problem: acquiring a rich digital representation of real-world objects from measurements while utilizing a generative model of prior domain knowledge. Using this prior knowledge, it becomes possible to complete missing features or reduce measurement errors. The first method parses the hierarchical structure of a building façade, given an orthophoto and a grammar that describes architectural constraints. The second method yields a hypothesis of the electrical wiring inside walls, given optical measurements (point clouds and photographs) and a grammar that describes the technical standards.
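The forward-modeling formalism based on shape grammars can be sketched with a toy split grammar (illustrative only; the rule names and the two-level façade–floors–tiles derivation are assumptions, not the thesis grammar): rules recursively subdivide an axis-aligned region into labeled sub-regions.

```python
def split(region, axis, sizes, labels):
    """Split an axis-aligned region (x, y, w, h) into labeled sub-regions
    proportional to the given sizes, along the 'x' or 'y' axis."""
    x, y, w, h = region
    total = float(sum(sizes))
    out, offset = [], 0.0
    for size, label in zip(sizes, labels):
        frac = size / total
        if axis == 'x':
            out.append((label, (x + offset * w, y, frac * w, h)))
        else:
            out.append((label, (x, y + offset * h, w, frac * h)))
        offset += frac
    return out

def derive_facade(width, height, n_floors, n_tiles):
    """Derive a toy facade: split vertically into floors,
    then split every floor horizontally into tiles."""
    shapes = []
    floors = split((0.0, 0.0, width, height), 'y',
                   [1.0] * n_floors, ['floor'] * n_floors)
    for _, floor_region in floors:
        shapes += split(floor_region, 'x',
                        [1.0] * n_tiles, ['tile'] * n_tiles)
    return shapes
```

Inverse procedural modeling, as in the façade-parsing method above, searches for the rule applications and parameters whose derivation best explains the observed photograph.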