VMV18

Stuttgart, Germany, October 10 – 12, 2018
Rendering
A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts
Sébastien Fourey, David Tschumperlé, and David Revoy
Interactive Interpolation of Metallic Effect Car Paints
Tim Golla and Reinhard Klein
Augmented Reality
Dynamic Environment Mapping for Augmented Reality Applications on Mobile Devices
Rafael Monroy, Matis Hudon, and Aljosa Smolic
WithTeeth: Denture Preview in Augmented Reality
Aleksandr Amirkhanov, Artem Amirkhanov, Matthias Bernhard, Zsolt Toth, Sabine Stiller, Andreas Geier, Eduard Gröller, and Gabriel Mistelbauer
Image Analysis and Visualization
The Parallel Eigenvectors Operator
Timo Oster, Christian Rössl, and Holger Theisel
Automatic Generation of Saliency-based Areas of Interest for the Visualization and Analysis of Eye-tracking Data
Wolfgang Fuhl, Thomas Kuebler, Thiago Santini, and Enkelejda Kasneci
Automatic Infant Face Verification via Convolutional Neural Networks
Leslie Wöhler, Hangjian Zhang, Georgia Albuquerque, and Marcus Magnor
Joint Session with GCPR I
Parameter Space Comparison of Inertial Particle Models
Jérôme Holbein and Tobias Günther
Scanning
Efficient Global Registration for Nominal/Actual Comparisons
Sarah Berkei, Max Limper, Christian Hörr, and Arjan Kuijper
Hierarchical Additive Poisson Disk Sampling
Alexander Dieckmann and Reinhard Klein
Data Structures and Volumes
Fast and Dynamic Construction of Bounding Volume Hierarchies Based on Loose Octrees
Feng Gu, Johannes Jendersie, and Thorsten Grosch
Compressed Bounding Volume Hierarchies for Efficient Ray Tracing of Disperse Hair
Magdalena Martinek, Marc Stamminger, Nikolaus Binder, and Alexander Keller
Efficient Subsurface Scattering Simulation for Time-of-Flight Sensors
David Bulczak and Andreas Kolb
Information and Geographic Visualization
Identifying Similar Eye Movement Patterns with t-SNE
Michael Burch
Correlated Point Sampling for Geospatial Scalar Field Visualization
Riccardo Roveri, Dirk J. Lehmann, Markus Gross, and Tobias Günther
Clustering for Stacked Edge Splatting
Moataz Abdelaal, Marcel Hlawatsch, Michael Burch, and Daniel Weiskopf
Joint Session with GCPR II
Painterly Rendering using Limited Paint Color Palettes
Thomas Lindemeier, J. Marvin Gülzow, and Oliver Deussen
Scientific Visualization
Web-based Volume Rendering using Progressive Importance-based Data Transfer
Finian Mwalongo, Michael Krone, Guido Reina, and Thomas Ertl
Interactive Visual Exploration of Line Clusters
Mathias Kanzler and Rüdiger Westermann

BibTeX (VMV18)
@inproceedings{10.2312:vmv.20181247,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts}},
  author    = {Fourey, Sébastien and Tschumperlé, David and Revoy, David},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181247}
}
@inproceedings{10.2312:vmv.20181248,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Interactive Interpolation of Metallic Effect Car Paints}},
  author    = {Golla, Tim and Klein, Reinhard},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181248}
}
@inproceedings{10.2312:vmv.20181249,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Dynamic Environment Mapping for Augmented Reality Applications on Mobile Devices}},
  author    = {Monroy, Rafael and Hudon, Matis and Smolic, Aljosa},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181249}
}
@inproceedings{10.2312:vmv.20181250,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{WithTeeth: Denture Preview in Augmented Reality}},
  author    = {Amirkhanov, Aleksandr and Amirkhanov, Artem and Bernhard, Matthias and Toth, Zsolt and Stiller, Sabine and Geier, Andreas and Gröller, Eduard and Mistelbauer, Gabriel},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181250}
}
@inproceedings{10.2312:vmv.20181251,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{The Parallel Eigenvectors Operator}},
  author    = {Oster, Timo and Rössl, Christian and Theisel, Holger},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181251}
}
@inproceedings{10.2312:vmv.20181252,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Automatic Generation of Saliency-based Areas of Interest for the Visualization and Analysis of Eye-tracking Data}},
  author    = {Fuhl, Wolfgang and Kuebler, Thomas and Santini, Thiago and Kasneci, Enkelejda},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181252}
}
@inproceedings{10.2312:vmv.20181253,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Automatic Infant Face Verification via Convolutional Neural Networks}},
  author    = {Wöhler, Leslie and Zhang, Hangjian and Albuquerque, Georgia and Magnor, Marcus},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181253}
}
@inproceedings{10.2312:vmv.20181254,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Parameter Space Comparison of Inertial Particle Models}},
  author    = {Holbein, Jérôme and Günther, Tobias},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181254}
}
@inproceedings{10.2312:vmv.20181255,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Efficient Global Registration for Nominal/Actual Comparisons}},
  author    = {Berkei, Sarah and Limper, Max and Hörr, Christian and Kuijper, Arjan},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181255}
}
@inproceedings{10.2312:vmv.20181256,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Hierarchical Additive Poisson Disk Sampling}},
  author    = {Dieckmann, Alexander and Klein, Reinhard},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181256}
}
@inproceedings{10.2312:vmv.20181257,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Fast and Dynamic Construction of Bounding Volume Hierarchies Based on Loose Octrees}},
  author    = {Gu, Feng and Jendersie, Johannes and Grosch, Thorsten},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181257}
}
@inproceedings{10.2312:vmv.20181258,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Compressed Bounding Volume Hierarchies for Efficient Ray Tracing of Disperse Hair}},
  author    = {Martinek, Magdalena and Stamminger, Marc and Binder, Nikolaus and Keller, Alexander},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181258}
}
@inproceedings{10.2312:vmv.20181260,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Identifying Similar Eye Movement Patterns with t-SNE}},
  author    = {Burch, Michael},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181260}
}
@inproceedings{10.2312:vmv.20181259,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Efficient Subsurface Scattering Simulation for Time-of-Flight Sensors}},
  author    = {Bulczak, David and Kolb, Andreas},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181259}
}
@inproceedings{10.2312:vmv.20181261,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Correlated Point Sampling for Geospatial Scalar Field Visualization}},
  author    = {Roveri, Riccardo and Lehmann, Dirk J. and Gross, Markus and Günther, Tobias},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181261}
}
@inproceedings{10.2312:vmv.20181263,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Painterly Rendering using Limited Paint Color Palettes}},
  author    = {Lindemeier, Thomas and Gülzow, J. Marvin and Deussen, Oliver},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181263}
}
@inproceedings{10.2312:vmv.20181262,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Clustering for Stacked Edge Splatting}},
  author    = {Abdelaal, Moataz and Hlawatsch, Marcel and Burch, Michael and Weiskopf, Daniel},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181262}
}
@inproceedings{10.2312:vmv.20181264,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Web-based Volume Rendering using Progressive Importance-based Data Transfer}},
  author    = {Mwalongo, Finian and Krone, Michael and Reina, Guido and Ertl, Thomas},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181264}
}
@inproceedings{10.2312:vmv.20181265,
  booktitle = {Vision, Modeling and Visualization},
  editor    = {Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip},
  title     = {{Interactive Visual Exploration of Line Clusters}},
  author    = {Kanzler, Mathias and Westermann, Rüdiger},
  year      = {2018},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-072-7},
  DOI       = {10.2312/vmv.20181265}
}

Recent Submissions

  • Item
    Frontmatter: VMV 2018: Vision, Modeling, and Visualization
    (The Eurographics Association, 2018) Beck, Fabian; Dachsbacher, Carsten; Sadlo, Filip
  • Item
    A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts
    (The Eurographics Association, 2018) Fourey, Sébastien; Tschumperlé, David; Revoy, David; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    We present a fast and efficient algorithm for the semi-supervised colorization of line-art images (e.g., hand-made cartoons), based on two successive steps: 1. a geometric analysis of the stroke contours and their closing by splines/segments, and 2. a colorization step based on the filling of the corresponding connected components, either with random colors or by extrapolating user-defined color scribbles. Our processing technique performs image colorization with a quality similar to previous state-of-the-art algorithms, while having a lower algorithmic complexity, which leaves more room for user interactivity.
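The second step, filling connected components once the strokes are closed, can be sketched as a plain BFS flood fill. The `flat_color` helper below is hypothetical and omits the paper's spline-based contour closing and scribble extrapolation:

```python
from collections import deque

def flat_color(lineart, colors):
    """Fill each connected non-stroke region of a binary line-art
    (1 = stroke pixel, 0 = empty) with the next color from `colors`
    (assumed non-negative). Returns a grid of color values, with -1
    marking stroke pixels."""
    h, w = len(lineart), len(lineart[0])
    labels = [[-1] * w for _ in range(h)]
    region = 0
    for sy in range(h):
        for sx in range(w):
            if lineart[sy][sx] == 0 and labels[sy][sx] == -1:
                # BFS flood fill of one connected component
                labels[sy][sx] = colors[region % len(colors)]
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and lineart[ny][nx] == 0
                                and labels[ny][nx] == -1):
                            labels[ny][nx] = labels[sy][sx]
                            queue.append((ny, nx))
                region += 1
    return labels
```

With user scribbles, the per-region color would instead be extrapolated from the scribble falling inside the component.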
  • Item
    Interactive Interpolation of Metallic Effect Car Paints
    (The Eurographics Association, 2018) Golla, Tim; Klein, Reinhard; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Metallic car paints are visually complex materials that, among other effects, exhibit a view-dependent metallic sparkling, which is particularly difficult to recreate in computer graphics. While capturing real-world metallic paints is possible with specialized devices, creating these materials computationally poses a difficult problem. We present a method that allows for interactive interpolation between measured metallic automotive paints, which can be used to generate new realistic-looking metallic paint materials. By clustering the color information present in the measured bidirectional texture function (BTF) responsible for the metallic sparkling effect, we set up an optimal transport problem between the metallic paints' appearances. The design of the problem facilitates efficiently finding a solution, based on which we generate a representation that allows for real-time generation of interpolated realistic materials. Interpolation happens smoothly; no flickering or other visual artifacts can be observed. The developed approach also enables separate interpolation of the larger-scale reflective properties, including the basic color hue, the local color hue, and the sparkling intensity of the metallic paint. Our method can be used intuitively to generate automotive paints with a novel appearance and to explore the space of possible metallic paints spanned by given real-world measurements. The resulting materials are also well suited for real-time rendering in standard engines.
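For intuition about transport-based interpolation: in one dimension the optimal transport plan between two equally weighted point sets is the monotone (sorted) matching, so displacement interpolation reduces to lerping sorted samples. This toy sketch only illustrates the idea; the paper's clustered BTF formulation is considerably more involved:

```python
def ot_interpolate(a, b, t):
    """1D displacement interpolation between two equal-size, equally
    weighted sample sets: the optimal transport plan is the monotone
    (sorted) matching, so we just lerp the sorted samples."""
    return [(1.0 - t) * x + t * y for x, y in zip(sorted(a), sorted(b))]
```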
  • Item
    Dynamic Environment Mapping for Augmented Reality Applications on Mobile Devices
    (The Eurographics Association, 2018) Monroy, Rafael; Hudon, Matis; Smolic, Aljosa; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Augmented reality is a topic of foremost interest nowadays. Its main goal is to seamlessly blend virtual content into real-world scenes. Due to the lack of computational power in mobile devices, rendering a virtual object with a high-quality, coherent appearance in real time remains an area of active research. In this work, we present a novel pipeline that allows for coupled environment acquisition and virtual object rendering on a mobile device equipped with a depth sensor. While keeping human interaction to a minimum, our system can scan a real scene and project it onto a two-dimensional environment map containing RGB+Depth data. Furthermore, we define a set of criteria that allows for an adaptive update of the environment map to account for dynamic changes in the scene. Then, under the assumption of diffuse surfaces and distant illumination, our method exploits an analytic expression for the irradiance in terms of spherical harmonic coefficients, which leads to a very efficient rendering algorithm. We show that all the processes in our pipeline can be executed while maintaining an average frame rate of 31 Hz on a mobile device.
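The analytic irradiance expression referenced here is the classic order-2 spherical harmonics formula of Ramamoorthi and Hanrahan. A minimal sketch, assuming the distant environment radiance is already projected into 9 real SH coefficients `L` (function names are illustrative, not the authors' code):

```python
import math

# Convolution coefficients of the clamped-cosine kernel per SH band
# (Ramamoorthi & Hanrahan's analytic irradiance formula).
A_HAT = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]

def sh_basis(n):
    """Real spherical harmonics basis up to band 2 at unit normal n."""
    x, y, z = n
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ]

def irradiance(L, n):
    """Irradiance at normal n from 9 SH radiance coefficients L,
    assuming diffuse surfaces and distant illumination."""
    bands = [0, 1, 1, 1, 2, 2, 2, 2, 2]
    return sum(A_HAT[l] * c * y for l, c, y in zip(bands, L, sh_basis(n)))
```

Because the sum has only nine terms, it maps directly to a few shader instructions, which is what makes the approach attractive on mobile GPUs.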
  • Item
    WithTeeth: Denture Preview in Augmented Reality
    (The Eurographics Association, 2018) Amirkhanov, Aleksandr; Amirkhanov, Artem; Bernhard, Matthias; Toth, Zsolt; Stiller, Sabine; Geier, Andreas; Gröller, Eduard; Mistelbauer, Gabriel; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Dentures are prosthetic devices replacing missing or damaged teeth, often used for dental reconstruction. Dental reconstruction improves the functional state and aesthetic appearance of teeth. State-of-the-art methods used by dental technicians typically do not include aesthetic analysis, which often leads to unsatisfactory results for patients. In this paper, we present a virtual mirror approach for a dental treatment preview in augmented reality. Different denture presets are visually evaluated and compared by switching them on the fly. Our main goals are to provide a virtual dental treatment preview to facilitate early feedback, and hence to build the confidence and trust of patients in the outcome. The workflow of our algorithm is as follows. First, the face is detected and 2D facial landmarks are extracted. Then, 3D pose estimation of the upper and lower jaws is performed and high-quality 3D models of the upper and lower dentures are fitted. The fitting uses the occlusal plane angle as determined manually by dental technicians. To provide a realistic impression of the virtual teeth, the dentures are rendered with motion blur. We demonstrate the robustness and visual quality of our approach by comparing results from a webcam and a DSLR camera under natural as well as controlled lighting conditions.
  • Item
    The Parallel Eigenvectors Operator
    (The Eurographics Association, 2018) Oster, Timo; Rössl, Christian; Theisel, Holger; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    The parallel vectors operator is a prominent tool in visualization that has been used for line feature extraction in a variety of applications such as ridge and valley lines, separation and attachment lines, and vortex core lines. It yields all points in a 3D domain where two vector fields are parallel. We extend this concept to the space of tensor fields, by introducing the parallel eigenvectors (PEV) operator. It yields all points in 3D space where two tensor fields have real parallel eigenvectors. Similar to the parallel vectors operator, these points form structurally stable line structures. We present an algorithm for extracting these lines from piecewise linear tensor fields by finding and connecting all intersections with the cell faces of a data set. The core of the approach is a simultaneous recursive search both in space and on all possible eigenvector directions. We demonstrate the PEV operator on different analytic tensor fields and apply it to several data sets from structural mechanics simulations.
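A pointwise version of the PEV test can be sketched for symmetric tensors by comparing eigenvector directions directly. The names below are hypothetical, and the sketch is only the per-point predicate, not the paper's recursive search in space and eigenvector directions over cell faces:

```python
import numpy as np

def parallel_eigenvectors(S, T, tol=1e-8):
    """Return pairs of unit eigenvectors of the symmetric 3x3 tensors
    S and T that are parallel up to sign (cross product near zero).
    Symmetric tensors always have real eigenvectors; the paper's
    operator handles the general real-eigenvector case."""
    _, VS = np.linalg.eigh(S)
    _, VT = np.linalg.eigh(T)
    pairs = []
    for u in VS.T:                       # columns are eigenvectors
        for v in VT.T:
            if np.linalg.norm(np.cross(u, v)) < tol:
                pairs.append((u, v))
    return pairs
```

Applied on a dense grid of sample points, the locations where this predicate fires trace out the structurally stable line structures described above.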
  • Item
    Automatic Generation of Saliency-based Areas of Interest for the Visualization and Analysis of Eye-tracking Data
    (The Eurographics Association, 2018) Fuhl, Wolfgang; Kuebler, Thomas; Santini, Thiago; Kasneci, Enkelejda; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Areas of interest (AOIs) are a powerful basis for the analysis and visualization of eye-tracking data. They allow relating eye-tracking metrics to semantic stimulus regions and performing further statistics. In this work, we propose a novel method for the automated generation of AOIs based on saliency maps. In contrast to existing methods from the state of the art, which generate AOIs based on eye-tracking data, our method generates AOIs based solely on the stimulus saliency, thus mimicking natural vision. This way, our method is not only independent of the eye-tracking data, but also enables AOI-based analysis even for complex stimuli, such as abstract art, where a proper manual definition of AOIs is not trivial. For evaluation, we cross-validate support vector machine classifiers with the task of separating visual scanpaths of art experts from those of novices. The motivation for this evaluation is to use AOIs as projection functions and to evaluate their robustness on different feature spaces. A good AOI separation should result in different feature sets that enable a fast evaluation with a widely automated workflow. The proposed method, together with the data shown in this paper, is available as part of the software EyeTrace [?] (http://www.ti.uni-tuebingen.de/Eyetrace.1751.0.html).
  • Item
    Automatic Infant Face Verification via Convolutional Neural Networks
    (The Eurographics Association, 2018) Wöhler, Leslie; Zhang, Hangjian; Albuquerque, Georgia; Magnor, Marcus; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    In this paper, we investigate how convolutional neural networks (CNN) can learn to solve the verification task for faces of young children. One of the main issues of automatic face verification approaches is how to deal with facial changes resulting from aging. Since the facial shape and features change drastically during early childhood, the recognition of children can be challenging even for human observers. Therefore, we design CNNs that take two infant photographs as input and verify whether they belong to the same child. To specifically train our CNNs to recognize young children, we collect a new infant face dataset including 4,528 face images of 42 subjects in the age range of 0 to 6 years. Our results show an accuracy of up to 85 percent for face verification on our dataset, with no overlapping subjects between the training and test data, and 72 percent on the FG-NET dataset for the age range of 0 to 4 years.
  • Item
    Parameter Space Comparison of Inertial Particle Models
    (The Eurographics Association, 2018) Holbein, Jérôme; Günther, Tobias; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    In many meteorological and engineering problems, the motion of finite-sized objects of non-zero mass plays a crucial role, such as in air pollution, desertification, stirring of dust during helicopter navigation, or droplets in clouds or hurricanes. The motion of these so-called inertial particles can be modeled by equations of motion that make certain application-specific assumptions. These models are determined by parameters, such as the particle size, the Stokes number, or the density ratio between particle and fluid. To describe the motion of finite-sized particles in an accurate and feasible way, one has to choose the most suitable particle model and its model parameters very carefully. In this paper, we present multiple interactive visualizations that allow us to compare different inertial particle models for a range of model parameters. To assess the similarities and disparities in the inertial pathline geometries in space-time, we first trace multiple inertial particles with varying model parameters from the same seed point and visualize their motion in space-time for different inertial particle models. Further, we find, for a given inertial trajectory in one model, the parameters of the other model that fit this trajectory best. Finally, we offer a quantitative view of the pair-wise inertial trajectory distance for each possible parameter combination of two inertial particle models for a given seed point. By visually exploring this parameter space, we can find similarities and dissimilarities between parameter configurations, which guides the selection of the model parameters. Since all these visualizations only consider one single seed point, we extend the methods by displaying the results for multiple seed points in the same domain or by using stacked visualizations. We apply our method to multiple analytic and numerical vector fields for two inertial particle models.
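One of the simplest inertial particle models is pure drag, where the particle velocity relaxes towards the fluid velocity with a response time tau. A 1D explicit-Euler sketch with a hypothetical `trace_inertial` helper and a naive sampled-trajectory distance (the paper's models and space-time distance are more elaborate):

```python
import math

def trace_inertial(x0, v0, u, tau, dt, steps):
    """Explicit-Euler trace of an inertial particle under simple drag:
    x' = v,  v' = (u(x) - v) / tau, with fluid velocity field u."""
    x, v = x0, v0
    path = [x]
    for _ in range(steps):
        x = x + dt * v
        v = v + dt * (u(x) - v) / tau
        path.append(x)
    return path

def trajectory_distance(a, b):
    """RMS distance between two equally sampled 1D trajectories."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)) / len(a))
```

Evaluating `trajectory_distance` over a grid of (tau, model) combinations from one seed point gives exactly the kind of parameter-space comparison visualized in the paper.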
  • Item
    Efficient Global Registration for Nominal/Actual Comparisons
    (The Eurographics Association, 2018) Berkei, Sarah; Limper, Max; Hörr, Christian; Kuijper, Arjan; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    We investigate global registration methods for Nominal/Actual comparisons, using precise, high-resolution 3D scans. First, we summarize existing approaches and requirements for this field of application. We then demonstrate that a basic RANSAC strategy, along with a slightly modified version of basic building blocks, can lead to a high global registration performance at moderate registration times. Specifically, we introduce a simple feedback loop that exploits the fast convergence of the ICP algorithm to efficiently speed up the search for a valid global alignment. Using the example of 3D printed parts and range images acquired by two different high-precision 3D scanners for quality control, we show that our method can be efficiently used for Nominal/Actual comparison. For this scenario, the proposed algorithm significantly outperforms the current state of the art with regard to registration time and success rate.
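Both the RANSAC hypotheses and the ICP refinement in such a pipeline rest on the same building block: the closed-form least-squares rigid alignment of corresponded point pairs. A minimal Kabsch-style sketch (not the authors' code):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i]
    for corresponded Nx3 point sets (Kabsch/Umeyama closed form)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP
```

RANSAC samples minimal correspondence sets and calls this estimator; the feedback-loop idea is then to run a few cheap ICP iterations on each promising hypothesis instead of exhaustively scoring all of them.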
  • Item
    Hierarchical Additive Poisson Disk Sampling
    (The Eurographics Association, 2018) Dieckmann, Alexander; Klein, Reinhard; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Generating samples of point clouds and meshes with blue-noise characteristics is desirable for many applications in rendering and geometry processing. Working with laser-scanned or lidar point clouds, we usually find regions with artifacts called scanlines and scan-edges. These regions are problematic for geometry processing applications, since it is not clear how many points should be selected to define a proper neighborhood. We present a method to construct a hierarchical additive Poisson disk sampling from densely sampled point sets, which yields better point neighborhoods. It can be easily implemented using an octree data structure where each octree node contains a grid, called a Modifiable Nested Octree [Sch14]. The generation of the sampling amounts to distributing the points over a hierarchy (octree) of resolution levels (grids) in a greedy manner. Propagating the distance constraint r through the hierarchy while drawing samples from the point set leads to a hierarchy of well-distributed, random samplings. This ensures that in a disk with radius r around a point, no other point upwards in the hierarchy is found. The sampling is additive in the sense that the union of point sets up to a certain hierarchy depth D is a Poisson disk sampling. This makes it easy to select a resolution where the scan artifacts have a lower impact on the processing result. The generated sampling can be made sensitive to surface features by a simple preprocessing step, yielding high-quality low-resolution Poisson samplings of point clouds.
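Stripped of the octree machinery, the additive property can be illustrated with a greedy selection in which level d uses radius r0 / 2^d and every candidate is tested against all points accepted at coarser or equal levels. A simplified brute-force sketch with hypothetical names (the paper's grid-based structure avoids the quadratic distance checks):

```python
import math

def hierarchical_poisson(points, r0, depth):
    """Greedy hierarchical Poisson-disk selection: a point enters level d
    only if it keeps distance >= r0 / 2**d to everything accepted at
    levels 0..d, so the union of levels 0..D is itself a Poisson-disk
    sampling with radius r0 / 2**D (the additive property)."""
    levels, accepted = [], []
    remaining = list(points)
    for d in range(depth):
        r = r0 / 2 ** d
        level, rest = [], []
        for p in remaining:
            if all(math.dist(p, q) >= r for q in accepted + level):
                level.append(p)
            else:
                rest.append(p)
        accepted += level
        levels.append(level)
        remaining = rest
    return levels
```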
  • Item
    Fast and Dynamic Construction of Bounding Volume Hierarchies Based on Loose Octrees
    (The Eurographics Association, 2018) Gu, Feng; Jendersie, Johannes; Grosch, Thorsten; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Many fast methods for constructing BVHs on the GPU only use the centroids of primitive bounding boxes and ignore the actual spatial extent of each primitive. We present a fast new method and a memory-efficient implementation to build a BVH from a loose octree for real-time ray tracing of fully dynamic scenes. Our memory-efficient implementation is an in-place method and generalizes the state-of-the-art parallel construction for LBVH to build the BVH from nodes of different levels.
  • Item
    Compressed Bounding Volume Hierarchies for Efficient Ray Tracing of Disperse Hair
    (The Eurographics Association, 2018) Martinek, Magdalena; Stamminger, Marc; Binder, Nikolaus; Keller, Alexander; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Ray traced human hair is becoming more and more ubiquitous in photorealistic image synthesis. Despite hierarchical data structures for accelerated ray tracing, performance suffers from the bad separability inherent in ensembles of hair strands. We propose a compressed acceleration data structure that improves separability by adaptively subdividing hair fibers. Compression is achieved by storing quantized as well as oriented bounding boxes and an indexing scheme that specifies curve segments instead of storing them. We trade memory for speed: our approach may use more memory, but in cases of highly curved hair we can double the number of traversed rays per second over prior work. With equal memory we still achieve a speed-up of up to 30%; with equal performance we can reduce memory by up to 30%.
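The quantization idea, storing child boxes as low-bit integer coordinates in the parent's frame and rounding conservatively outward so traversal never misses a primitive, can be sketched as follows (hypothetical helpers for axis-aligned boxes only; the paper additionally compresses oriented boxes and curve-segment indices):

```python
import math

def quantize_aabb(parent_lo, parent_hi, child_lo, child_hi, bits=8):
    """Encode a child AABB as `bits`-bit integer coordinates in the
    parent box's local frame, rounding outward (floor for lo, ceil for
    hi) so the decoded box always contains the original child box.
    The child is assumed to lie inside the (non-degenerate) parent."""
    n = (1 << bits) - 1
    qlo, qhi = [], []
    for pl, ph, cl, ch in zip(parent_lo, parent_hi, child_lo, child_hi):
        ext = ph - pl
        qlo.append(max(0, math.floor((cl - pl) / ext * n)))
        qhi.append(min(n, math.ceil((ch - pl) / ext * n)))
    return qlo, qhi

def dequantize_aabb(parent_lo, parent_hi, qlo, qhi, bits=8):
    """Decode integer coordinates back to a (conservative) child AABB."""
    n = (1 << bits) - 1
    lo = [pl + q / n * (ph - pl) for pl, ph, q in zip(parent_lo, parent_hi, qlo)]
    hi = [pl + q / n * (ph - pl) for pl, ph, q in zip(parent_lo, parent_hi, qhi)]
    return lo, hi
```

The conservative rounding slightly inflates each box (hence a few extra traversal steps) in exchange for storing 8-bit instead of 32-bit coordinates.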
  • Item
    Identifying Similar Eye Movement Patterns with t-SNE
    (The Eurographics Association, 2018) Burch, Michael; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    In this paper, we describe an approach based on t-distributed stochastic neighbor embedding (t-SNE) for projecting high-dimensional eye movement data to two dimensions. The lower-dimensional data is then represented as scatterplots reflecting the local structure of the high-dimensional eye movement data, hence providing a strategy to identify similar eye movement patterns. The scatterplots can be used as a means to interact with the data and to further annotate and analyze it for additional properties focusing on space, time, or participants. Since t-SNE oftentimes produces groups of data points mapped to and overplotted in small scatterplot regions, we additionally support the modification of data point groups by a force-directed placement, applied as a post-processing step after the initial t-SNE algorithm is stopped. This spatial modification can be applied to each identified data point group independently, which is difficult to integrate into a standard t-SNE approach. We illustrate the usefulness of our technique by applying it to formerly conducted eye tracking studies investigating the readability of public transport maps and map annotations.
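The force-directed post-processing, spreading an overplotted group of projected points until it becomes legible, can be sketched with a simple pairwise repulsion (a hypothetical `spread_group` helper, not the authors' implementation):

```python
import math

def spread_group(points, radius, iters=50, step=0.05):
    """Repulsive force-directed spreading of one overplotted 2D
    scatterplot group: points closer than `radius` push each other
    apart; once all pairs are at least `radius` apart, forces vanish."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        for i, p in enumerate(pts):
            fx = fy = 0.0
            for j, q in enumerate(pts):
                if i == j:
                    continue
                dx, dy = p[0] - q[0], p[1] - q[1]
                d = math.hypot(dx, dy)
                if 0.0 < d < radius:
                    fx += dx / d * (radius - d)
                    fy += dy / d * (radius - d)
            p[0] += step * fx
            p[1] += step * fy
    return pts
```

Running this per identified group, rather than globally, is what preserves the overall t-SNE layout while decluttering each group locally.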
  • Item
    Efficient Subsurface Scattering Simulation for Time-of-Flight Sensors
    (The Eurographics Association, 2018) Bulczak, David; Kolb, Andreas; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Today, amplitude modulated continuous-wave (AMCW) Time-of-Flight (ToF) range cameras are ubiquitous devices that are employed in many fields of application, such as robotics, the automotive industry, and home entertainment. Compared to standard RGB cameras, ToF cameras suffer from various error sources related to their fundamental functional principle, such as multipath interference, motion artifacts, or subsurface scattering. Simulating ToF cameras is essential in order to improve future ToF devices or to predict their operability in specific application scenarios. In this paper we present a first simulation approach for ToF cameras that incorporates subsurface scattering effects in semi-transparent media. Subsurface scattering significantly alters the optical path length measured by the ToF camera, leading to erroneous phase calculations and, eventually, to wrong range values. We address the challenge to efficiently simulate the superimposed light paths regarding intensity and phase. We address a restricted configuration, i.e., a single semi-transparent layer located on top of an opaque object. Our interactive screen-space AMCW ToF simulation technique incorporates a two-pass light scattering propagation, involving the forward and backward scattering at the interface between air and the semi-transparent object, taking amplitude and phase variations into account. We evaluate our approach by comparing our simulation results to real-world measurements.
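For context, the phase-to-range computation that subsurface scattering corrupts is the standard four-phase AMCW algorithm; a minimal sketch assuming ideal correlation samples taken at 0°, 90°, 180°, and 270° of the modulation signal:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def tof_range(a0, a1, a2, a3, f_mod):
    """Range from four AMCW correlation samples (standard four-phase
    algorithm): phase = atan2(a3 - a1, a0 - a2), and the round-trip
    at modulation frequency f_mod gives range = c * phase / (4 pi f)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod)
```

Superimposed scattered paths shift the effective phase of these samples, which is exactly why the simulated subsurface response translates into wrong range values.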
  • Item
    Correlated Point Sampling for Geospatial Scalar Field Visualization
    (The Eurographics Association, 2018) Roveri, Riccardo; Lehmann, Dirk J.; Gross, Markus; Günther, Tobias; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    Multi-variate visualizations of geospatial data often use combinations of different visual cues, such as color and texture. For textures, different point distributions (blue noise, regular grids, etc.) can encode nominal data. In this paper, we study the suitability of point distribution interpolation to encode quantitative information. For the interpolation, we use a texture synthesis algorithm, which paves the way towards an encoding of quantitative data using points. First, we conduct a user study to perceptually linearize the transitions between uniform point distributions, including blue noise, regular grids, and hexagonal grids. Based on the linearization models, we implement a point sampling-based visualization for geospatial scalar fields, and we assess the accuracy of users' perception by comparing the perceived transition with the transition expected from our linearized models. We illustrate our technique on several real geospatial data sets, in which users identify regions with a certain distribution. Point distributions work well in combination with color data, as they require little space and allow the user to see through to the underlying color maps. We found that interpolations between blue noise and regular grids worked perceptually best among the tested candidates.
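A crude stand-in for interpolating between a regular grid and a more random point distribution is a jitter parameter t. The paper instead interpolates via texture synthesis and perceptually linearizes the transitions, so the following is purely illustrative:

```python
import random

def jittered_grid(n, t, seed=0):
    """n*n points in the unit square interpolating between a regular
    grid (t = 0) and a fully jittered grid (t = 1); the jitter per cell
    scales linearly with t. A toy stand-in for distribution interpolation."""
    rng = random.Random(seed)
    pts = []
    for i in range(n):
        for j in range(n):
            jx = (rng.random() - 0.5) * t / n
            jy = (rng.random() - 0.5) * t / n
            pts.append(((i + 0.5) / n + jx, (j + 0.5) / n + jy))
    return pts
```

Mapping a scalar field value to t at each location is the basic idea of encoding quantitative data in the point distribution itself.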
  • Item
    Painterly Rendering using Limited Paint Color Palettes
    (The Eurographics Association, 2018) Lindemeier, Thomas; Gülzow, J. Marvin; Deussen, Oliver; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    We present a painterly rendering method for digital painting systems as well as for visual-feedback-based painting machines. It automatically extracts color palettes from images and, based on the Kubelka-Munk theory, computes mixture recipes for these palettes from a set of real base paint colors. In addition, we present a new algorithm for distributing stroke candidates, which creates paintings with sharp details and contrasts. Our system is able to predict dry compositing of thinned or thick paint colors using an evaluation scheme based on example data collected from a calibration step and optical blending. We show results generated using a software stroke-based renderer and a painting machine.
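The Kubelka-Munk theory the abstract relies on models paint by wavelength-dependent absorption (K) and scattering (S) coefficients; mixtures combine K and S by concentration, and reflectance follows from the K/S ratio. The sketch below uses the standard infinite-thickness KM reflectance formula with hypothetical single-wavelength base paints (the numeric k/s values are invented for illustration):

```python
import math

def km_reflectance(k, s):
    """Infinite-thickness Kubelka-Munk reflectance from absorption k
    and scattering s: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S)."""
    ratio = k / s
    return 1.0 + ratio - math.sqrt(ratio * ratio + 2.0 * ratio)

def mix_paints(paints, weights):
    """Concentration-weighted K and S of a mixture (one wavelength)."""
    total = sum(weights)
    k = sum(w * p['k'] for p, w in zip(paints, weights)) / total
    s = sum(w * p['s'] for p, w in zip(paints, weights)) / total
    return k, s

# Hypothetical base paints at a single wavelength.
white = {'k': 0.05, 's': 1.0}   # low absorption, high scattering
blue  = {'k': 0.60, 's': 0.4}   # absorbs strongly at this wavelength
k, s = mix_paints([white, blue], [0.7, 0.3])
r_mix = km_reflectance(k, s)     # falls between the two base paints
```

A real recipe solver, as described in the paper, would search over concentrations so the mixture's predicted spectrum matches a target palette color; this fragment only shows the forward prediction step.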
  • Item
    Clustering for Stacked Edge Splatting
    (The Eurographics Association, 2018) Abdelaal, Moataz; Hlawatsch, Marcel; Burch, Michael; Weiskopf, Daniel; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    We present a time-scalable approach for visualizing dynamic graphs. By adopting bipartite graph layouts known from parallel edge splatting, individual graphs are horizontally stacked by drawing partial edges, leading to stacked edge splatting. This allows us to uncover temporal patterns while achieving time-scalability. To preserve the graph's structural information, we introduce the representative graph, in which edges are aggregated and drawn at full length. The representative graph is then placed on top of the last graph in the (sub)sequence. This allows us to obtain detailed information about the partial edges by tracing them back to the representative graph. We apply sequential temporal clustering to obtain an overview of the different temporal phases of the graph sequence together with the corresponding structure for each phase. We demonstrate the effectiveness of our approach on real-world datasets.
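The abstract does not specify the criterion used for sequential temporal clustering; one plausible sketch, under the assumption that adjacent time steps are merged while their edge sets stay similar, uses Jaccard similarity between consecutive edge sets (the threshold and criterion are assumptions, not the paper's method):

```python
def jaccard(a, b):
    """Jaccard similarity of two edge sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def sequential_clusters(graphs, threshold):
    """Split a dynamic-graph sequence into temporal phases: start a new
    phase whenever the edge set changes too much from the previous step.
    Being sequential, it never groups non-adjacent time steps."""
    phases = [[0]]
    for i in range(1, len(graphs)):
        if jaccard(graphs[i - 1], graphs[i]) >= threshold:
            phases[-1].append(i)
        else:
            phases.append([i])
    return phases

# Five time steps: a stable phase, then an abrupt structural change.
seq = [{(0, 1), (1, 2)}, {(0, 1), (1, 2)}, {(0, 1), (1, 2), (2, 3)},
       {(4, 5), (5, 6)}, {(4, 5), (5, 6)}]
phases = sequential_clusters(seq, 0.5)  # -> [[0, 1, 2], [3, 4]]
```

Each resulting phase could then be summarized by one representative graph, as the abstract describes.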
  • Item
    Web-based Volume Rendering using Progressive Importance-based Data Transfer
    (The Eurographics Association, 2018) Mwalongo, Finian; Krone, Michael; Reina, Guido; Ertl, Thomas; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    WebGL 2.0 makes it possible to implement efficient volume rendering that runs in browsers using 3D textures and complex fragment shaders. However, a typical bottleneck for web-based volume rendering is the size of the volumetric data sets. Transferring these data to the client for rendering can take a substantial amount of time, depending on the network speed. This can introduce latency that can in turn affect interactive rendering at the client. We address this challenge by introducing multi-resolution bricked volume rendering, in which data is transferred progressively. Similar to MIP mapping, the volume data is divided into multiple levels of detail. Each level of detail is broken down into bricks. The client requests the data brick by brick, starting with the lowest resolution, and renders each brick immediately as it is received. The 3D volume texture is updated as bricks with higher resolution are received asynchronously from the server. The advantages of this algorithm are that it reduces latency, the user can see at least a reduced-detail version of the data almost immediately, and the application always stays responsive while the data is updated. We also implemented a prioritization scheme for the bricks, where each brick can be assigned an importance value. Using this information, the client can request more important bricks first. Furthermore, we investigated the influence of data compression on the transfer and processing times.
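The request scheduling described above — coarse levels first, and more important bricks first within a level — can be sketched with a priority queue. The brick identifiers, level numbering, and importance values below are illustrative assumptions; the paper's actual scheduling policy may differ:

```python
import heapq

def schedule_bricks(bricks):
    """Order brick requests: coarsest level first (level 0 = lowest
    resolution), then by descending importance within each level.
    bricks: iterable of (brick_id, level, importance)."""
    heap = [(level, -importance, brick_id)
            for brick_id, level, importance in bricks]
    heapq.heapify(heap)
    order = []
    while heap:
        _level, _neg_imp, brick_id = heapq.heappop(heap)
        order.append(brick_id)
    return order

bricks = [
    ('b0', 0, 1.0),  # the low-resolution overview is always fetched first
    ('b1', 1, 0.2),
    ('b2', 1, 0.9),  # more important than b1, so requested earlier
    ('b3', 2, 0.5),
]
request_order = schedule_bricks(bricks)  # -> ['b0', 'b2', 'b1', 'b3']
```

On the rendering side, each arriving brick would be uploaded into the corresponding region of the 3D texture (e.g. via `texSubImage3D` in WebGL 2.0) so the view refines in place.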
  • Item
    Interactive Visual Exploration of Line Clusters
    (The Eurographics Association, 2018) Kanzler, Mathias; Westermann, Rüdiger; Beck, Fabian and Dachsbacher, Carsten and Sadlo, Filip
    We propose a visualization approach to interactively explore the structure of clusters of lines in 3D space. We introduce cluster consistency fields to indicate the local consistency of the lines in a cluster depending on line density and the dispersion of line directions. Via brushing, the user can select a focus region where lines are shown, and the consistency fields are used to automatically control the density of displayed lines according to information content. The brush is automatically continued along the gradient of the consistency field towards high-information regions, or along a derived mean direction field to reveal major pathways. For a given line clustering, visualizations of cluster hulls are added to preserve context information.
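A common way to quantify the dispersion of directions that a consistency measure like the one above could build on is the mean resultant length of the unit direction vectors: 1 for parallel lines, near 0 for fully dispersed ones. This is a generic directional-statistics sketch, not the paper's definition, and it assumes consistently oriented tangents (line directions are actually sign-ambiguous):

```python
import math

def direction_consistency(directions):
    """Length of the mean of the unit direction vectors:
    1.0 for parallel directions, approaching 0.0 for dispersed ones."""
    n = len(directions)
    sx = sy = sz = 0.0
    for dx, dy, dz in directions:
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        sx += dx / norm
        sy += dy / norm
        sz += dz / norm
    return math.sqrt(sx * sx + sy * sy + sz * sz) / n

parallel = [(1.0, 0.0, 0.0)] * 4                    # fully consistent
spread = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
          (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]        # fully dispersed
```

Combined with a local line-density estimate, such a value per grid cell would yield a scalar field whose gradient can guide brush continuation as described.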