2022


Detail-driven Geometry Processing Pipeline using Neural Networks

Wang, Yifan

Homogenizing Yarn Simulations: Large-scale mechanics, small-scale detail, and quantitative fitting

Sperl, Georg

Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method

Sorgente, Tommaso

Procedural noises for the design of small-scale structures in Additive Manufacturing

Tricard, Thibault

Algorithms for Data-Driven Geometric Stylization & Acceleration

Liu, Hsueh-Ti Derek

Deep Learning Interpretability with Visual Analytics: Exploring Reasoning and Bias Exploitation

Jaunet, Theo

Registration of Heterogeneous Data for Urban Modeling

Djahel, Rahima

Scaling Up Medical Visualization: Multi-Modal, Multi-Patient, and Multi-Audience Approaches for Medical Data Exploration, Analysis and Communication

Moerth, Eric

Interactive Authoring of 3D Shapes Represented as Programs

Michel, Élie

Data-driven models of 3D avatars and clothing for virtual try-on

Santesteban, Igor

Latency Hiding and High Fidelity Novel View Synthesis on Thin Clients using Decoupled Streaming Rendering from Powerful Servers

Hladký, Jozef

Drawing On Surfaces

Mancinelli, Claudio

Computer Vision and Deep Learning based road monitoring towards a Connected, Cooperative and Automated Mobility

Iparraguirre, Olatz

Efficient and High Performing Biometrics: Towards Enabling Recognition in Embedded Domains

Boutros, Fadi

Topological Aspects of Maps Between Surfaces

Born, Janis


Recent Submissions

Now showing 1 - 15 of 15
  • Item
    Detail-driven Geometry Processing Pipeline using Neural Networks
    (ETH Research Collection, 2022-01) Wang, Yifan
    Geometry processing is an established field in computer graphics, covering a variety of topics that embody decades-long research. However, with the pressing demand for reality digitization arising in recent years, classic geometry processing solutions are confronted with new challenges. For almost all geometry processing algorithms, a fundamental requirement is the ability to represent, preserve and reconstruct geometric details. Many established and highly-optimized geometry processing techniques rely heavily on educated user inputs and careful per-instance parameter tuning. However, fueled by the proliferation of consumer-level 3D acquisition devices and the growing accessibility of shape modeling applications for ordinary users, there is a tremendous need for automatic geometry processing algorithms that perform robustly even on incomplete and distorted data. In order to transform existing techniques to meet the new requirements, a new mechanism is needed to distill user expertise into algorithms. This thesis offers a solution to the aforementioned challenge by utilizing a contemporary technology from the machine learning community, namely deep learning. A general geometry processing pipeline includes the following key steps: raw data processing and enhancement, surface reconstruction from raw data, and shape modeling. Over the course of this thesis, we demonstrate how a variety of tasks in each step of the pipeline can be automated and, more importantly, strengthened by incorporating deep learning to leverage consistencies and high-level semantic priors from data. Specifically, this thesis proposes two point-based geometry processing algorithms that contribute to the raw data processing step, as well as two algorithms involving implicit representations for the surface reconstruction step, and one shape deformation algorithm for the last shape modeling step of the geometry processing pipeline. 
    We demonstrate that, by designing suitable deep learning paradigms and integrating them into the existing geometry processing pipeline, we can achieve substantial progress with little or no user guidance, especially for challenging inputs, e.g. noise-ridden, undersampled or unaligned data. Correspondingly, the contributions in this thesis aim to enable autonomous and large-scale geometry processing and drive forward the ongoing transition to digitized reality.
  • Item
    Homogenizing Yarn Simulations: Large-scale mechanics, small-scale detail, and quantitative fitting
    (Institute of Science and Technology Austria, 2022-09-09) Sperl, Georg
    The complex yarn structure of knitted and woven fabrics gives rise to both mechanical and visual complexity. The small-scale interactions of yarns colliding with and pulling on each other result in drastically different large-scale stretching and bending behavior, introducing anisotropy, curling, and more. While simulating cloth as individual yarns can reproduce this complexity and match the quality of real fabric, it may be too computationally expensive for large fabrics. On the other hand, continuum-based approaches do not need to discretize the cloth at stitch level, but it is non-trivial to find a material model that replicates the large-scale behavior of yarn fabrics, and they discard the intricate visual detail. In this thesis, we discuss three methods to bridge the gap between small-scale and large-scale yarn mechanics using numerical homogenization: fitting a continuum model to periodic yarn simulations, adding mechanics-aware yarn detail onto thin-shell simulations, and quantitatively fitting yarn parameters to physical measurements of real fabric. To start, we present a method for animating yarn-level cloth effects using a thin-shell solver. We first use a large number of periodic yarn-level simulations to build a model of the potential energy density of the cloth, and then use it to compute forces in a thin-shell simulator. The resulting simulations faithfully reproduce expected effects like the stiffening of woven fabrics and the highly deformable nature and anisotropy of knitted fabrics at a fraction of the cost of full yarn-level simulation. While our thin-shell simulations are able to capture large-scale yarn mechanics, they lack the rich visual detail of yarn-level simulations. Therefore, we propose a method to animate yarn-level cloth geometry on top of an underlying deforming mesh in a mechanics-aware fashion in real time. 
Using triangle strains to interpolate precomputed yarn geometry, we are able to reproduce effects such as knit loops tightening under stretching at negligible cost. Finally, we introduce a methodology for inverse-modeling of yarn-level mechanics of cloth, based on the mechanical response of fabrics in the real world. We compile a database from physical tests of several knitted fabrics used in the textile industry spanning diverse physical properties like stiffness, nonlinearity, and anisotropy. We then develop a system for approximating these mechanical responses with yarn-level cloth simulation, using homogenized shell models to speed up computation and adding some small-but-necessary extensions to yarn-level models used in computer graphics.
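    The homogenization loop described above (periodic yarn-level simulations, a fitted energy-density model, then forces in a thin-shell solver) can be caricatured in one dimension. This is a hypothetical sketch, not the thesis's actual model: a single stiffness parameter k is fit by least squares to synthetic (strain, energy) samples standing in for periodic yarn simulations, and the force is the negative derivative of the fitted energy.

```python
def fit_stiffness(samples):
    """Least-squares fit of E(s) = 0.5*k*s^2 to (strain, energy) pairs."""
    num = sum(0.5 * s * s * e for s, e in samples)
    den = sum((0.5 * s * s) ** 2 for s, _ in samples)
    return num / den

def homogenized_force(k, s):
    """Force from the fitted energy density: f = -dE/ds = -k*s."""
    return -k * s

# Synthetic stand-in for yarn-level measurements, with known stiffness 3.0
k_true = 3.0
samples = [(i / 10.0, 0.5 * k_true * (i / 10.0) ** 2) for i in range(1, 11)]
k_fit = fit_stiffness(samples)
```

    A real homogenized shell model fits a full anisotropic energy over stretching and bending strains, but the fit-then-differentiate structure is the same.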
  • Item
    Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method
    (University of Genoa, Department of Mathematics, 2022-08-31) Sorgente, Tommaso
    This thesis explores the concept of the quality of a mesh, the latter being intended as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are massively used in several fields from both the geometry processing and the numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results in a target range of accuracy. In other words, a good quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: “How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?” We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature related to both engineering and computer graphics applications. This analysis leads to the precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution consists in a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method over a particular mesh before running the simulation. Strictly related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighboring elements. The accuracy and the reliability of both tools are thoroughly verified in a series of tests in different scenarios.
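    As a toy illustration of a geometric quality measure for polygonal elements (not the VEM-specific indicator contributed by the thesis), the isoperimetric quotient 4*pi*A/P^2 scores a disk at 1 and degenerate sliver elements near 0:

```python
import math

def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (x, y) tuples."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def polygon_perimeter(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def isoperimetric_quality(pts):
    """4*pi*A / P^2: 1 for a disk, near 0 for sliver elements."""
    p = polygon_perimeter(pts)
    return 4.0 * math.pi * polygon_area(pts) / (p * p)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]          # quality pi/4 ~ 0.785
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]    # quality ~ 0.03
```

    VEM-oriented indicators also account for features such as edge-length ratios and element convexity, but they share this pattern: a cheap per-element score computed before any simulation is run.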
  • Item
    Procedural noises for the design of small-scale structures in Additive Manufacturing
    (2022-04-08) Tricard, Thibault
    The democratization of additive manufacturing (AM) has sparked a renewed interest in its potential applications. Among these, the ability to print small-scale structures is particularly promising: the geometry of internal small-scale structures directly influences the physical properties of the final parts. Thus, finding novel small-scale structures producing specific target properties expands the possibilities offered to additive manufacturing users, unlocking new potential applications in soft robotics, for the design of prosthetics and orthoses, and for mechanical engineering at large. However, to be helpful in the wild, these small-scale structures have to expose controls over the properties they trigger -- and allow their variation in space -- so as to adapt their behavior to the user's intent. Interestingly, this type of spatial control over properties has been extensively studied in Computer Graphics, in particular for texture synthesis. The objective of this thesis is to enable the same type of spatial control that is achieved by texture synthesis methods for the synthesis of small-scale structures in Additive Manufacturing. In particular, I focused on defining strongly oriented small-scale structures. These trigger extremely anisotropic properties within the parts, a type of behavior that has not been extensively covered in prior works. To achieve this, I proposed to revisit the procedural formulations developed for texture synthesis in Computer Graphics, where each subpart of a pattern can be computed independently, following only local information. I successfully applied this approach to the generation of complex, oriented small-scale structures in large volumes. My first contribution is a novel approach for efficiently synthesizing highly contrasted oscillating patterns that can closely follow property fields such as orientation and density while still being computed locally. 
I demonstrated this approach for texturing applications as well as for the synthesis of strongly oriented, anisotropic multi-material small-scale structures. This first method generates patterns that exhibit local defects, and therefore my second contribution extended this work to formulate a low-cost, efficient regularization technique that rectifies the oscillations. This led to the synthesis of freely orientable, self-supporting structures that can be used to trigger programmed deformations in 3D printed objects. My third contribution explores how to use a similar approach to define trajectories in fully filled 3D printed parts, under orientation objectives. By adjusting the phase of the oscillations, we are able to break the spatial alignments along the build direction that would otherwise result in local weaknesses in the produced parts.
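    The "computed locally, following a property field" idea can be made concrete with a hypothetical toy in the spirit of procedural oscillating patterns (not the thesis's actual formulation): each sample depends only on the query point and a local orientation field, so any point of a large volume can be evaluated independently and in parallel, and a tanh sharpening yields the highly contrasted profile.

```python
import math

def orientation(x, y):
    """Hypothetical smoothly varying orientation field (radians)."""
    return 0.5 * math.sin(x) + 0.25 * math.cos(2.0 * y)

def pattern(x, y, freq=8.0, contrast=25.0):
    """Highly contrasted oscillation following the local orientation.

    Purely local: no neighborhood lookups, no global solve."""
    a = orientation(x, y)
    osc = math.sin(2.0 * math.pi * freq * (math.cos(a) * x + math.sin(a) * y))
    return 0.5 + 0.5 * math.tanh(contrast * osc)  # sharpen toward 0/1
```

    Thresholding such a field at 0.5 produces a two-material structure whose stripes follow the orientation field; the frequency parameter plays the role of a density control.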
  • Item
    Algorithms for Data-Driven Geometric Stylization & Acceleration
    (University of Toronto, 2022-09-29) Liu, Hsueh-Ti Derek
    In this thesis, we investigate computer algorithms for creating stylized 3D digital content and numerical tools for processing high-resolution geometric data. This thesis first addresses the problem of geometric stylization. Existing 3D content creation tools lack support for creating stylized 3D assets. They often require years of professional training and are tedious for creating complex geometries. One goal of this thesis is to address such a difficulty by presenting a novel suite of easy-to-use stylization algorithms. This involves a differentiable rendering technique to generalize image filters to filter 3D objects and a machine learning approach to renovate classic modeling operations. In addition, we address the problem by proposing an optimization framework for stylizing 3D shapes. We demonstrate how these new modeling tools can lower the difficulties of stylizing 3D geometric objects. The second part of the thesis focuses on scalability. Most geometric algorithms suffer from expensive computation costs when scaling up to high-resolution meshes. The computation bottleneck of these algorithms often lies in fundamental numerical operations, such as solving systems of linear equations. In this thesis, we present two directions to overcome such challenges. We first show that it is possible to coarsen a geometry and enjoy the efficiency of working on coarsened representation without sacrificing the quality of solutions. This is achieved by simplifying a mesh while preserving its spectral properties, such as eigenvalues and eigenvectors of a differential operator. Instead of coarsening the domain, we also present a scalable geometric multigrid solver for curved surfaces. We show that this can serve as a drop-in replacement of existing linear solvers to accelerate several geometric applications, such as shape deformation and physics simulation. 
The resulting algorithms in this thesis can be used to develop data-driven 3D stylization tools for inexperienced users and for scaling up existing geometry processing pipelines.
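    To illustrate what a geometric multigrid solver buys as a drop-in linear-solver replacement, here is a minimal recursive V-cycle for the 1D Poisson problem. This is a standard textbook sketch, far simpler than the curved-surface multigrid contributed by the thesis, but it shows the same smooth/restrict/correct structure.

```python
import math

def apply_A(u, h):
    """Matrix-free 1D Poisson operator (-u'' with zero Dirichlet BCs)."""
    n = len(u)
    return [(2.0 * u[i]
             - (u[i - 1] if i > 0 else 0.0)
             - (u[i + 1] if i < n - 1 else 0.0)) / (h * h) for i in range(n)]

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted Jacobi smoothing: damps high-frequency error."""
    for _ in range(sweeps):
        Au = apply_A(u, h)
        u = [u[i] + w * (h * h / 2.0) * (f[i] - Au[i]) for i in range(len(u))]
    return u

def restrict(r):
    """Full-weighting restriction: n = 2m+1 fine points -> m coarse points."""
    return [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
            for j in range((len(r) - 1) // 2)]

def prolong(ec, n):
    """Linear interpolation of the coarse correction back to the fine grid."""
    e = [0.0] * n
    for j, v in enumerate(ec):
        e[2 * j + 1] = v
    for i in range(0, n, 2):
        left = e[i - 1] if i > 0 else 0.0
        right = e[i + 1] if i < n - 1 else 0.0
        e[i] = 0.5 * (left + right)
    return e

def v_cycle(u, f, h):
    if len(u) == 1:                      # coarsest grid: direct solve
        return [f[0] * h * h / 2.0]
    u = jacobi(u, f, h)                  # pre-smooth
    r = [fi - ai for fi, ai in zip(f, apply_A(u, h))]
    ec = v_cycle([0.0] * ((len(u) - 1) // 2), restrict(r), 2.0 * h)
    u = [ui + ei for ui, ei in zip(u, prolong(ec, len(u)))]
    return jacobi(u, f, h)               # post-smooth

n, h = 15, 1.0 / 16.0                    # interior sizes of the form 2^k - 1
exact = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
f = apply_A(exact, h)
u = [0.0] * n
for _ in range(10):
    u = v_cycle(u, f, h)
```

    Each V-cycle costs O(n) work yet contracts the error by a roughly resolution-independent factor, which is why multigrid can replace direct or Krylov solvers in deformation and simulation loops.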
  • Item
    Deep Learning Interpretability with Visual Analytics: Exploring Reasoning and Bias Exploitation
    (2022-05-16) Jaunet, Theo
    In the last couple of years, Artificial Intelligence (AI) and Machine Learning have evolved from research domains addressed in laboratories far from the public eye to technology deployed at industrial scale, widely impacting our daily lives. This trend has started to raise legitimate concerns, as these methods are also used to address critical problems like finance and autonomous driving, in which decisions can have a life-threatening impact. Since a large part of the underlying complexity of the decision process is learned from massive amounts of data, it remains unknown, both to the builders of those models and to the people impacted by them, how models make decisions. This led to the new field of eXplainable AI (XAI) and the problem of analyzing the behavior of trained models to shed light on their reasoning modes and the underlying biases they are subject to. This thesis contributes to this emerging field with the design of novel visual analytics systems tailored to the study and improvement of the interpretability of Deep Neural Networks. Our goal was to empower experts with tools helping them to better interpret the decisions of their models. We also contributed explorable applications designed to introduce Deep Learning methods to non-expert audiences. Our focus was on the under-explored challenge of interpreting and improving models for different applications such as robotics, where important decisions must be taken from high-dimensional and low-level inputs such as images.
  • Item
    Registration of Heterogeneous Data for Urban Modeling
    (2022-06-22) Djahel, Rahima
    Indoor/outdoor modeling of buildings is an important issue in the field of building life cycle management. It is seen as a joint process where the two aspects collaborate to take advantage of their semantic and geometric complementarity. This global approach allows a more complete, correct, precise and coherent reconstruction of buildings. This thesis is part of the Building Indoor/Outdoor Modeling (BIOM) ANR project that aims at automatic, simultaneous indoor and outdoor modeling of buildings from images and dense point clouds. The first ambition of the BIOM ANR project is to integrate heterogeneous data sources for building modeling. The heterogeneity lies in the data type (image/LiDAR), the acquisition platform (terrestrial/aerial), the acquisition mode (dynamic/static) and the point of view (indoor/outdoor). The first issue of such modeling is thus to precisely register these data. The work carried out has confirmed that the environment and the type of data drive the choice of the registration algorithm. Our contribution consists in exploiting the physical and geometric properties of the data and the acquisition platforms in order to propose potential solutions for all the registration problems encountered by the project. As in a building environment most objects are composed of geometric primitives (planar polygons, straight lines, openings), we chose to introduce registration algorithms based on these primitives. The basic idea of these algorithms is to define a global energy between the primitives extracted from the datasets to be registered, and to propose a robust method for optimizing this energy based on the RANSAC paradigm. Our contributions range from robust methods to extract the selected primitives to the integration of these primitives in an efficient registration framework. 
    Our solutions overcome the limitations of existing algorithms and have proven their effectiveness in solving the challenging problems encountered by the project, such as indoor (static mode)/outdoor (dynamic mode) registration, image/LiDAR registration, and aerial/terrestrial registration.
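    The "global energy between extracted primitives, optimized with a RANSAC paradigm" idea can be sketched in a toy 2D setting, which is an illustrative simplification and not the project's actual framework: hypothesize a rotation from a single matched pair of line directions, score it by inlier count, and refine on the consensus set.

```python
import math, random

def rotate(v, a):
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def angle_between(u, v):
    """Signed angle taking direction u onto direction v."""
    return math.atan2(u[0] * v[1] - u[1] * v[0], u[0] * v[0] + u[1] * v[1])

def ransac_rotation(pairs, iters=200, tol=0.02, seed=0):
    """Hypothesize from one matched pair, keep the largest consensus set."""
    rng = random.Random(seed)
    best_a, best_inliers = 0.0, []
    for _ in range(iters):
        u, v = rng.choice(pairs)
        a = angle_between(u, v)
        inliers = [p for p in pairs
                   if abs(angle_between(rotate(p[0], a), p[1])) < tol]
        if len(inliers) > len(best_inliers):
            best_a, best_inliers = a, inliers
    # refine on the consensus set (mean residual angle)
    best_a += sum(angle_between(rotate(u, best_a), v)
                  for u, v in best_inliers) / len(best_inliers)
    return best_a

# Synthetic matched direction pairs: 70% follow a 0.5 rad rotation, 30% outliers
rng = random.Random(1)
true_a, pairs = 0.5, []
for i in range(50):
    t = rng.uniform(0, 2 * math.pi)
    u = (math.cos(t), math.sin(t))
    if i % 10 < 7:
        pairs.append((u, rotate(u, true_a + rng.gauss(0, 0.005))))
    else:
        t2 = rng.uniform(0, 2 * math.pi)
        pairs.append((u, (math.cos(t2), math.sin(t2))))
```

    Registering full 6-DoF poses from planes, lines and openings follows the same hypothesize-score-refine loop, only with richer primitives and a richer energy.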
  • Item
    Scaling Up Medical Visualization: Multi-Modal, Multi-Patient, and Multi-Audience Approaches for Medical Data Exploration, Analysis and Communication
    (The University of Bergen, 2022-09-02) Moerth, Eric
    Medical visualization is one of the most application-oriented areas of visualization research. Close collaboration with medical experts is essential for interpreting medical imaging data and creating meaningful visualization techniques and visualization applications. Cancer is one of the most common causes of death, and with the increasing average age in developed countries, gynecological malignancy case numbers are rising. Modern imaging techniques are an essential tool in assessing tumors and produce an increasing amount of imaging data that radiologists must interpret. Besides the number of imaging modalities, the number of patients is also rising, leading to visualization solutions that must be scaled up to address the rising complexity of multi-modal and multi-patient data. Furthermore, medical visualization is not only targeted toward medical professionals but also has the goal of informing patients, relatives, and the public about the risks of certain diseases and potential treatments. Therefore, we identify the need to scale medical visualization solutions to cope with multi-audience demands. This thesis addresses the scaling of these dimensions in the different contributions we made. First, we present our techniques to scale medical visualizations across multiple modalities. We introduced a visualization technique using small multiples to display the data of multiple modalities within one imaging slice. This allows radiologists to explore the data efficiently without having several juxtaposed windows. In the next step, we developed an analysis platform using radiomic tumor profiling on multiple imaging modalities to analyze cohort data and to find new imaging biomarkers. Imaging biomarkers are indicators based on imaging data that predict clinical outcome-related variables. Radiomic tumor profiling is a technique that generates potential imaging biomarkers based on first- and second-order statistical measurements. 
    The application allows medical experts to analyze the multi-parametric imaging data to find potential correlations between clinical parameters and the radiomic tumor profiling data. This approach scales up in two dimensions: multi-modal and multi-patient. In a later version, we added features to scale the multi-audience dimension by making our application applicable to cervical and prostate cancer data in addition to the endometrial cancer data the application was designed for. In a subsequent contribution, we focus on tumor data at another scale and enable the analysis of tumor sub-parts by using the multi-modal imaging data in a hierarchical clustering approach. Our application finds potentially interesting regions that could inform future treatment decisions. In another contribution, the digital probing interaction, we focus on multi-patient data. The imaging data of multiple patients can be compared to find interesting tumor patterns potentially linked to the aggressiveness of the tumors. Lastly, we scale the multi-audience dimension with our similarity visualization, applicable to endometrial cancer research, neurological cancer imaging research, and machine learning research on the automatic segmentation of tumor data. In contrast to the previously highlighted contributions, our last contribution, ScrollyVis, focuses primarily on multi-audience communication. We enable the creation of dynamic scientific scrollytelling experiences for a specific or general audience. Such stories can be used for specific use cases such as patient-doctor communication, or for communicating scientific results via stories targeting the general audience in a digital museum exhibition. Our proposed applications and interaction techniques have been demonstrated in application use cases and evaluated with domain experts and focus groups. As a result, we brought some of our contributions into practical use at other research institutes. 
We want to evaluate their impact on other scientific fields and the general public in future work.
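    First-order radiomic features of the kind mentioned above can be sketched as plain statistics over the intensities inside a tumor mask. This is a minimal illustration; real radiomics pipelines follow standardized feature definitions and careful intensity discretization.

```python
import math
from collections import Counter

def first_order_features(intensities, bins=8):
    """First-order statistics of voxel intensities inside a tumor mask."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    std = math.sqrt(var)
    skew = (sum((x - mean) ** 3 for x in intensities) / n) / std ** 3 if std else 0.0
    # histogram-based entropy on a fixed number of intensity bins
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / bins or 1.0
    hist = Counter(min(int((x - lo) / width), bins - 1) for x in intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}

flat = first_order_features([5.0] * 16)             # homogeneous region
bimodal = first_order_features([0.0] * 8 + [7.0] * 8)
```

    Second-order (texture) features would additionally look at co-occurrence statistics between neighboring voxels.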
  • Item
    Interactive Authoring of 3D Shapes Represented as Programs
    (Institut Polytechnique de Paris, 2022-07-11) Michel, Élie
    Although hardware and techniques have considerably improved over the years at handling heavy content, digital 3D creation remains fairly complex, partly because the bottleneck also lies in the cognitive load imposed on designers. A recent shift to higher-order representations of shapes, encoding them as computer programs that generate their geometry, enables creation pipelines that better manage the cognitive load, but this also comes with its own sources of friction. We study in this thesis the new challenges and opportunities introduced by program-based representations of 3D shapes in the context of digital content authoring. We investigate ways for the interaction with the shapes to remain as much as possible in 3D space, rather than operating on abstract symbols in program space. This includes both assisting the creation of the program, by allowing manipulation in 3D space while still ensuring good generalization upon changes of the free variables of the program, and helping one to tune these variables by enabling direct manipulation of the output of the program. We explore the diversity of program-based representations, focusing on various paradigms of visual programming interfaces, from imperative directed acyclic graphs (DAGs) to declarative Wang tiles, through more hybrid approaches. In all cases we study shape programs that evaluate at interactive rates, so that they fit in a creation process, and we push this further by studying synergies of program-based representations with real-time rendering pipelines. We enable the use of direct manipulation methods on DAG output thanks to automated rewriting rules and a non-linear filtering of differential data. We help the creation of imperative shape programs by turning geometric selections into semantic queries, and of declarative programs by proposing an interface-first editing scheme for authoring 3D content in Wang tiles. 
We extend tiling engines to handle continuous tile parameters and arbitrary slot graphs, and to suggest new tiles to add to the set. We blend shape programs into the visual feedback loop by delegating tile content evaluation to the real-time rendering pipeline or exploiting the program's semantics to drive an impostor-based level-of-details system. Overall, our series of contributions aims at leveraging program-based representations of shapes to make the process of authoring 3D digital scenes more of an artistic act and less of a technical task.
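    A shape represented as a program can be sketched as a DAG of operations evaluated with memoization over shared subgraphs; re-evaluating after changing a free variable is what direct manipulation of the output would drive. This is a hypothetical minimal evaluator, not the thesis's system.

```python
class Node:
    """A node of a shape-program DAG: an operation over input nodes."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

def evaluate(node, params, cache=None):
    """Evaluate the DAG with memoization; re-run after tuning `params`."""
    if cache is None:
        cache = {}
    if id(node) not in cache:
        args = [evaluate(i, params, cache) for i in node.inputs]
        cache[id(node)] = node.op(params, *args)
    return cache[id(node)]

def param(name):
    """A free variable of the program, exposed for tuning."""
    return Node(lambda p: p[name])

def rect(w, h):
    """Axis-aligned rectangle outline built from two scalar inputs."""
    return Node(lambda p, w, h: [(0, 0), (w, 0), (w, h), (0, h)], w, h)

prog = rect(param("w"), param("h"))
```

    Direct manipulation then amounts to inverting this evaluation: dragging a vertex of the output searches for parameter values whose re-evaluation reproduces the drag.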
  • Item
    Data-driven models of 3D avatars and clothing for virtual try-on
    (2022-07) Santesteban, Igor
    Clothing plays a fundamental role in our everyday lives. When we choose clothing to buy or wear, we guide our decisions based on a combination of fit and style. For this reason, the majority of clothing is purchased at brick-and-mortar retail stores, after physical try-on to test the fit and style of several garments on our own bodies. Computer graphics technology promises an opportunity to support online shopping through virtual try-on, but to date virtual try-on solutions lack the responsiveness of a physical try-on experience. This thesis works towards developing new virtual try-on solutions that meet the demanding requirements of accuracy, interactivity and scalability. To this end, we propose novel data-driven models for 3D avatars and clothing that produce highly realistic results at a fraction of the computational cost of physics-based approaches. Throughout the thesis we also address common limitations of data-driven methods by using self-supervision mechanisms to enforce physical constraints and reduce the dependency on ground-truth data. This allows us to build efficient and accurate models with minimal preprocessing times.
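    A self-supervision mechanism that enforces physical constraints can be sketched as a loss term that needs no ground-truth data, only the garment's rest shape. This toy version uses a mass-spring strain surrogate on mesh edges, which is a stand-in and not the thesis's actual formulation.

```python
import math

def strain_energy(verts, edges, rest_lengths, k=1.0):
    """Mass-spring surrogate for membrane strain: penalizes edge stretching.
    Requires only the rest shape, no ground-truth deformations."""
    return sum(0.5 * k * (math.dist(verts[i], verts[j]) - L0) ** 2
               for (i, j), L0 in zip(edges, rest_lengths))

def self_supervised_loss(pred, target, edges, rest_lengths, lam=0.1):
    """Data term on available supervision plus a physics term on the prediction."""
    data = sum(math.dist(p, t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return data + lam * strain_energy(pred, edges, rest_lengths)

# One triangle of a garment mesh in its rest state
rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
edges = [(0, 1), (0, 2), (1, 2)]
rest_lengths = [math.dist(rest[i], rest[j]) for i, j in edges]
```

    During training, the physics term regularizes predictions toward physically plausible states even where ground-truth simulation data is sparse.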
  • Item
    Latency Hiding and High Fidelity Novel View Synthesis on Thin Clients using Decoupled Streaming Rendering from Powerful Servers
    (Universität des Saarlandes, 2022) Hladký, Jozef
    Highly responsive 3D applications with state-of-the-art visual fidelity have always been associated with heavy, immobile workstation hardware. By offloading demanding computations to powerful servers in the cloud, streaming 3D content from the data center to a thin client can deliver a high-fidelity, responsive experience that is indistinguishable from content computed locally on a powerful workstation. We introduce methods suitable for this scenario that enable network latency hiding. In the first part, we introduce a novel high-dimensional space---the camera offset space---and show how it can help to identify an analytical potentially visible set (PVS) of geometry valid for a range of camera translational and rotational offsets. We demonstrate an efficient parallel implementation of the visibility resolution algorithm, which leads to a first-ever method for computing a PVS that is valid for an analytical range of camera offsets, is computable in real time without the need for pre-processing or spatial data structure construction, and requires only a raw triangle stream as input. In the second part of the thesis, we focus on capturing the scene appearance into structures that enable efficient encoding and decoding, transmission, a low memory footprint, and high-fidelity, high-framerate reconstruction on the client. Multiple strategies for shading sample distribution and texture atlas packing layouts are presented and analyzed for shading reconstruction quality, packing, and compression efficiency. The third part of the thesis presents a data structure that jointly encodes both appearance and geometry into a texture atlas. The scene G-buffer is processed to construct coarse low-resolution geometric proxies which capture the scene appearance and simple planar surfaces. These proxies can be locally augmented with high-resolution data to capture complex geometry in sufficient detail, achieving efficient sample distribution and allocation. 
Capturing the scene from multiple views enables disocclusion support and allows network latency hiding on a thin client device.
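    Texture atlas packing of shading tiles, one of the ingredients above, can be sketched with the classic shelf heuristic. This is a baseline for intuition, not one of the packing layouts analyzed in the thesis.

```python
def shelf_pack(rects, atlas_w):
    """Pack (w, h) tiles into horizontal shelves of a fixed-width atlas.

    Assumes every tile is no wider than the atlas. Returns per-tile
    positions (in input order) and the total atlas height used."""
    order = sorted(range(len(rects)), key=lambda i: -rects[i][1])  # tall first
    pos = [None] * len(rects)
    x = y = shelf_h = 0
    for i in order:
        w, h = rects[i]
        if x + w > atlas_w:          # current shelf full: open a new one
            y += shelf_h
            x, shelf_h = 0, 0
        pos[i] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return pos, y + shelf_h

positions, height = shelf_pack([(4, 2), (4, 2), (4, 2), (4, 2)], atlas_w=8)
```

    Sorting by height keeps shelves tight; the resulting occupancy directly affects how well the atlas compresses for transmission.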
  • Item
    Drawing On Surfaces
    (2022-07-25) Mancinelli, Claudio
    Vector graphics in 2D has been consolidated for decades and is supported in many design applications, such as Adobe Illustrator, and languages, like Scalable Vector Graphics (SVG). In this thesis, we address the problem of designing algorithms that support the generation of vector graphics on a discrete surface. We require such algorithms to rely on the intrinsic geometry of the surface, and to support real-time interaction on highly-tessellated meshes (a few million triangles). Both of these requirements aim at mimicking the behavior of standard drawing systems in the Euclidean context in the following sense. Working in the intrinsic setting means that we consider the surface as our canvas, and any quantity needed to fulfill a given task will be computed directly on it, without resorting to any type of local/global parametrization or projection. In this way, we are sure that, once the theoretical limitations behind a given operation are properly handled, our result will always be consistent with the input regardless of the surface we are working with. As we will see, in some cases this may imply that one geometric primitive cannot be indefinitely large, but must be contained in a proper subset of the surface. Requiring the algorithms to support real-time interaction on large meshes makes it possible to use them via a click-and-drag procedure, just as in the 2D case. Both of these requirements pose several challenges. On the one hand, working with a metric different from the Euclidean one implies that most of the properties one relies on in the plane are not preserved when considering a surface, so the conditions under which geometric primitives admit a well-defined counterpart in the manifold setting need to be carefully investigated in order to ensure the robustness of our algorithms. 
    On the other hand, the building blocks of most such algorithms are geodesic paths and distances, which are known to be expensive to compute in computer graphics, especially if one is interested in accurate results, which is our case. The purpose of this thesis is to show how this problem can be addressed while fulfilling all the above requirements. The final result is a Graphical User Interface (GUI) endowed with all the main tools present in a 2D drawing system, allowing the user to generate geometric primitives on a mesh robustly and in real time.
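    The classic baseline for distances on a mesh is Dijkstra's algorithm on the edge graph. It overestimates true geodesic distances, since paths are restricted to mesh edges, which is one reason the more accurate (and more expensive) algorithms the thesis relies on are needed. A minimal version:

```python
import heapq, math

def dijkstra_distances(verts, edges, src):
    """Shortest edge-path distances from vertex `src` on a mesh edge graph."""
    adj = {i: [] for i in range(len(verts))}
    for i, j in edges:
        w = math.dist(verts[i], verts[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Unit square with one diagonal edge: the diagonal beats the two-edge path
verts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
dist = dijkstra_distances(verts, edges, 0)
```

    On a fine triangulation the overestimation shrinks but never vanishes, whereas intrinsic geodesic algorithms can cross triangle interiors and return exact polyhedral distances.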
  • Item
    Computer Vision and Deep Learning based road monitoring towards a Connected, Cooperative and Automated Mobility
    (2022-11-11) Iparraguirre, Olatz
    The future of mobility will be connected, cooperative and autonomous. All vehicles on the road will be connected to each other as well as to the infrastructure. Traffic will be mixed, and human-driven vehicles will coexist alongside self-driving vehicles with different levels of automation. This mobility model will bring greater safety and efficiency in driving, as well as more sustainable and inclusive transport. For this future to be possible, vehicular communications, as well as perception systems, become indispensable. Perception systems are capable of understanding the environment and adapting driving behaviour to it (following the trajectory, adjusting speed, overtaking manoeuvres, lane changes, etc.). However, these autonomous systems have limitations that make their operation impossible in certain circumstances (low visibility, dense traffic, poor infrastructure conditions, etc.). Such unexpected events would trigger the system to transfer control to the driver, which could become an important safety weakness. At this point, communication between the different elements of the road network becomes important, since the impact of these unexpected events can be mitigated or even avoided as long as the vehicle has access to dynamic road information. This information would make it possible to anticipate the disengagement of the automated system and to adapt the driving task or prepare the control transfer less abruptly. In this thesis, we propose to develop a road monitoring system that, installed in vehicles travelling on the road network, performs automatic auscultation of the status of the infrastructure and can detect critical events for driving. In the context of this research work, the aim is to develop three independent modules: 1) a system for detecting fog and classifying the degree of visibility; 2) a system for recognising traffic signs; 3) a system for detecting defects in road lines. 
This solution will make it possible to generate cooperative services for communicating critical road events to other road users. It will also enable an inventory of assets that facilitates the management of maintenance and investment tasks for infrastructure managers. In addition, it opens the way for autonomous driving by enabling better management of control transitions in critical situations and by preparing the infrastructure to receive self-driving vehicles with high levels of automation.
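The three modules described in the abstract are independent perception components that can be orchestrated as a single on-vehicle pipeline. A minimal illustrative sketch, assuming each module maps a camera frame to labelled detections (the `RoadEvent` type, module names, and threshold are hypothetical, not taken from the thesis):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical event record; field names are illustrative only.
@dataclass
class RoadEvent:
    module: str       # which perception module raised the event
    label: str        # e.g. "dense_fog", "speed_limit_80", "worn_lane_line"
    confidence: float

# A module takes a camera frame and returns (label, confidence) detections.
Detector = Callable[[object], List[Tuple[str, float]]]

def run_monitoring(frame: object, modules: Dict[str, Detector],
                   threshold: float = 0.5) -> List[RoadEvent]:
    """Run each perception module on one frame and keep confident events."""
    events: List[RoadEvent] = []
    for name, detect in modules.items():
        for label, conf in detect(frame):
            if conf >= threshold:
                events.append(RoadEvent(name, label, conf))
    return events

# Dummy detectors standing in for the thesis's three modules.
modules: Dict[str, Detector] = {
    "fog_visibility": lambda f: [("dense_fog", 0.9)],
    "traffic_signs": lambda f: [("speed_limit_80", 0.8), ("yield", 0.3)],
    "lane_lines": lambda f: [("worn_lane_line", 0.7)],
}

events = run_monitoring(frame=None, modules=modules)
```

In the cooperative setting the abstract envisions, the confident events would then be broadcast to other road users via vehicular communications or logged into the infrastructure manager's asset inventory.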
  • Item
    Efficient and High Performing Biometrics: Towards Enabling Recognition in Embedded Domains
    (2022-06-14) Boutros, Fadi
The growing need for reliable and accurate recognition solutions along with the recent innovations in deep learning methodologies has reshaped the research landscape of biometric recognition. Developing efficient biometric solutions is essential to minimize the required computational costs, especially when deployed on embedded and low-end devices. This drives the main contributions of this work, aiming at enabling a wide application range of biometric technologies. Towards enabling wider implementation of face recognition in use cases that are extremely limited by computational complexity constraints, this thesis presents a set of efficient models for accurate face verification, namely MixFaceNets. With a focus on automated network architecture design, this thesis is the first to utilize neural architecture search to successfully develop a family of lightweight face-specific architectures, namely PocketNets. Additionally, this thesis proposes a novel training paradigm based on knowledge distillation (KD), the multi-step KD, to enhance the verification performance of compact models. Towards enhancing face recognition accuracy, this thesis presents a novel margin-penalty softmax loss, ElasticFace, that relaxes the restriction of having a single fixed penalty margin. Face occlusion by facial masks during the recent COVID-19 pandemic presents an emerging challenge for face recognition. This thesis presents a solution that mitigates the effects of wearing a mask and improves masked face recognition performance. This solution operates on top of existing face recognition models and thus avoids the high cost of retraining existing face recognition models or deploying a separate solution for masked face recognition. Aiming at introducing biometric recognition to novel embedded domains, this thesis is the first to propose leveraging the existing hardware of head-mounted displays for identity verification of the users of virtual and augmented reality applications.
This is additionally supported by proposing a compact ocular segmentation solution as part of an iris and periocular recognition pipeline. Furthermore, an identity-preserving synthetic ocular image generation approach is designed to mitigate potential privacy concerns related to access to real biometric data and to facilitate the further development of biometric recognition in new domains.
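The core idea behind relaxing a single fixed penalty margin, as in ElasticFace, is to draw a per-sample margin from a Gaussian rather than using one constant. A minimal NumPy sketch of such an elastic additive angular margin (the function name and the default values of `m_mean`, `m_std`, and `scale` are illustrative assumptions, not the thesis's exact settings):

```python
import numpy as np

def elastic_margin_logits(cos_theta, labels, m_mean=0.5, m_std=0.05,
                          scale=64.0, rng=None):
    """Apply an elastic additive angular margin to the target-class logits.

    Instead of one fixed margin (as in a standard margin-penalty softmax),
    each sample's target logit cos(theta) is replaced by cos(theta + m_i),
    where m_i is drawn from N(m_mean, m_std). Non-target logits are kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    logits = np.array(cos_theta, dtype=float, copy=True)
    rows = np.arange(len(labels))
    margins = rng.normal(m_mean, m_std, size=len(labels))
    theta = np.arccos(np.clip(logits[rows, labels], -1.0, 1.0))
    logits[rows, labels] = np.cos(theta + margins)
    return scale * logits  # scaled logits would then feed cross-entropy

# Two samples, two classes: cosine similarities to the class centres.
cos = np.array([[0.8, 0.1],
                [0.2, 0.9]])
out = elastic_margin_logits(cos, labels=np.array([0, 1]),
                            rng=np.random.default_rng(0))
```

With a positive margin, each target logit is pushed down relative to the plain scaled cosine, so the model must learn a larger angular separation; the randomness of the margin is what distinguishes the elastic variant from a fixed-margin loss.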
  • Item
    Topological Aspects of Maps Between Surfaces
    (2022) Born, Janis
    The generation of high-quality maps between surfaces of 3D shapes is a fundamental task with countless applications in geometry processing. There is a particular demand for maps that offer strict validity properties such as continuity and bijectivity, i.e. surface homeomorphisms. Such maps not only define a geometric one-to-one correspondence between surface points, but also a matching of topological features: an identification of handles and tunnels and how the map wraps around them. Finding a natural, low-distortion surface homeomorphism between a given pair of shapes is a challenging design task that involves both combinatorial (topological) and continuous (geometric) degrees of freedom. However, while powerful methods exist to improve existing homeomorphisms through continuous modifications, these are limited to merely geometric updates, and hence cannot alter map topology. In this light, it is quite surprising that most existing techniques for the initial construction of homeomorphisms do not systematically deal with questions of map topology and instead relegate these issues to user input or ad-hoc solutions. Unfortunately, this lack of reliable and automatic methods for the critically important topological initialization has so far prevented a further automation of homeomorphic surface map generation. In this thesis, we aim to close this practical gap by devising new algorithms that specifically address the map-topological issues underlying the construction of surface homeomorphisms. Our theoretical foundation is the study of the mapping class group, an algebraic structure which characterizes the entire topological design space. We approach the task of map topology generation from two different angles, based on different mapping class representations: We propose a robust method for the construction of maps from sparse landmark correspondences, based on compatible layout embeddings. 
Our robust embedding strategy systematically searches for short, natural embeddings and therefore reliably avoids a range of sporadic topological initialization errors which can occur with previous heuristic approaches. Additionally, we introduce a novel algorithm to extract topological map descriptions from approximate, non-homeomorphic input maps. Such a purely abstract description of map topology may then be used to guide the construction of a proper homeomorphism. As our inference method is highly robust to a wide range of map defects and imperfect map representations, it effectively makes it possible to delegate the difficult task of finding a natural map topology to specialized shape matching methods, which have grown increasingly capable. These advancements promote the further automation of map generation techniques in two regards: they vastly reduce the need for human supervision, and they make the results of automatic shape matching methods accessible for topological initialization.