VMV2023

Braunschweig, Germany | September 27–29, 2023
Rendering and Modelling
Digitizing Interlocking Building Blocks
Sebastian Lieb, Thorsten Thormählen, and Felix Rieger
Improving NeRF Quality by Progressive Camera Placement for Free-Viewpoint Navigation
Georgios Kopanas and George Drettakis
Ray Tracing Spherical Harmonics Glyphs
Christoph Peters, Tark Patel, Will Usher, and Chris R. Johnson
N-SfC: Robust and Fast Shape Estimation from Caustic Images
Marc Kassubeck, Moritz Kappel, Susana Castillo, and Marcus Magnor
Topology-Controlled Reconstruction from Partial Cross-Sections
Amani Shhadi and Gill Barequet
PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis
Florian Hahlbohm, Moritz Kappel, Jan-Philipp Tauscher, Martin Eisemann, and Marcus Magnor
Image Visualization and Analysis
Interactions for Seamlessly Coupled Exploration of High-Dimensional Images and Hierarchical Embeddings
Alexander Vieth, Boudewijn Lelieveldt, Elmar Eisemann, Anna Vilanova, and Thomas Höllt
Perceptually Guided Automatic Parameter Optimization for Interactive Visualization
Daniel Opitz, Tobias Zirr, Carsten Dachsbacher, and Lorenzo Tessari
Neural Fields for Interactive Visualization of Statistical Dependencies in 3D Simulation Ensembles
Fatemeh Farokhmanesh, Kevin Höhlein, Christoph Neuhauser, Tobias Necker, Martin Weissmann, Takemasa Miyoshi, and Rüdiger Westermann
On the Beat: Analysing and Evaluating Synchronicity in Dance Performances
Malte Menzel, Jan-Philipp Tauscher, and Marcus Magnor
Visually Analyzing Topic Change Points in Temporal Text Collections
Cedric Krause, Jonas Rieger, Jonathan Flossdorf, Carsten Jentsch, and Fabian Beck
Factors Influencing Visual Comparison of Colored Directed Acyclic Graphs
Cynthia Graniczkowska, Laura Pelchmann, Tatiana von Landesberger, and Margit Pohl
Visual-assisted Outlier Preservation for Scatterplot Sampling
Haiyan Yang and Renato Pajarola
Image Processing
Greedy Image Approximation for Artwork Generation via Contiguous Bézier Segments
Julius Nehring-Wirxel, Isaak Lim, and Leif Kobbelt
Semantic Image Abstraction using Panoptic Segmentation for Robotic Painting
Michael Stroh, Jörg-Marvin Gülzow, and Oliver Deussen
MetaISP -- Exploiting Global Scene Structure for Accurate Multi-Device Color Rendition
Matheus Souza and Wolfgang Heidrich
Video-Driven Animation of Neural Head Avatars
Wolfgang Paier, Paul Hinzer, Anna Hilsmann, and Peter Eisert
Leveraging BC6H Texture Compression and Filtering for Efficient Vector Field Visualization
Simon Oehrl, Jan Frieder Milke, Jens Koenen, Torsten W. Kuhlen, and Tim Gerrits
Optimizing Temporal Stability in Underwater Video Tone Mapping
Matthias Franz, B. Matthias Thang, Pascal Sackhoff, Timon Scholz, Jannis Möller, Steve Grogorick, and Martin Eisemann
Art-directable Stroke-based Rendering on Mobile Devices
Ronja Wagner, Sebastian Schulz, Max Reimann, Amir Semmo, Jürgen Döllner, and Matthias Trapp
Fluid Simulation and Visualization
Out-of-Core Particle Tracing for Monte Carlo Rendering of Finite-Time Lyapunov Exponents
Nicholas Grätz and Tobias Günther
Autonomous Particles for In-Situ-Friendly Flow Map Sampling
Steve Wolligant, Christian Rössl, Cheng Chi, Dominique Thévenin, and Holger Theisel
Exploring Physical Latent Spaces for High-Resolution Flow Restoration
Chloé Paliard, Nils Thuerey, and Kiwon Um
Consistent SPH Rigid-Fluid Coupling
Jan Bender, Lukas Westhofen, and Stefan Rhys Jeske
Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids
Fabian Löschner, Timna Böttcher, Stefan Rhys Jeske, and Jan Bender
Uncertain Stream Lines
Janos Zimmermann, Michael Motejat, Christian Rössl, and Holger Theisel

BibTeX (VMV2023)
@inproceedings{10.2312:vmv.20232020,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{VMV 2023: Frontmatter}},
  author = {Guthe, Michael and Grosch, Thorsten},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20232020}
}
@inproceedings{10.2312:vmv.20231221,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Digitizing Interlocking Building Blocks}},
  author = {Lieb, Sebastian and Thormählen, Thorsten and Rieger, Felix},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231221}
}
@inproceedings{10.2312:vmv.20231222,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Improving NeRF Quality by Progressive Camera Placement for Free-Viewpoint Navigation}},
  author = {Kopanas, Georgios and Drettakis, George},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231222}
}
@inproceedings{10.2312:vmv.20231223,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Ray Tracing Spherical Harmonics Glyphs}},
  author = {Peters, Christoph and Patel, Tark and Usher, Will and Johnson, Chris R.},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231223}
}
@inproceedings{10.2312:vmv.20231224,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{N-SfC: Robust and Fast Shape Estimation from Caustic Images}},
  author = {Kassubeck, Marc and Kappel, Moritz and Castillo, Susana and Magnor, Marcus},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231224}
}
@inproceedings{10.2312:vmv.20231225,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Topology-Controlled Reconstruction from Partial Cross-Sections}},
  author = {Shhadi, Amani and Barequet, Gill},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231225}
}
@inproceedings{10.2312:vmv.20231226,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis}},
  author = {Hahlbohm, Florian and Kappel, Moritz and Tauscher, Jan-Philipp and Eisemann, Martin and Magnor, Marcus},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231226}
}
@inproceedings{10.2312:vmv.20231227,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Interactions for Seamlessly Coupled Exploration of High-Dimensional Images and Hierarchical Embeddings}},
  author = {Vieth, Alexander and Lelieveldt, Boudewijn and Eisemann, Elmar and Vilanova, Anna and Höllt, Thomas},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231227}
}
@inproceedings{10.2312:vmv.20231228,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Perceptually Guided Automatic Parameter Optimization for Interactive Visualization}},
  author = {Opitz, Daniel and Zirr, Tobias and Dachsbacher, Carsten and Tessari, Lorenzo},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231228}
}
@inproceedings{10.2312:vmv.20231229,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Neural Fields for Interactive Visualization of Statistical Dependencies in 3D Simulation Ensembles}},
  author = {Farokhmanesh, Fatemeh and Höhlein, Kevin and Neuhauser, Christoph and Necker, Tobias and Weissmann, Martin and Miyoshi, Takemasa and Westermann, Rüdiger},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231229}
}
@inproceedings{10.2312:vmv.20231230,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{On the Beat: Analysing and Evaluating Synchronicity in Dance Performances}},
  author = {Menzel, Malte and Tauscher, Jan-Philipp and Magnor, Marcus},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231230}
}
@inproceedings{10.2312:vmv.20231231,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Visually Analyzing Topic Change Points in Temporal Text Collections}},
  author = {Krause, Cedric and Rieger, Jonas and Flossdorf, Jonathan and Jentsch, Carsten and Beck, Fabian},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231231}
}
@inproceedings{10.2312:vmv.20231232,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Factors Influencing Visual Comparison of Colored Directed Acyclic Graphs}},
  author = {Graniczkowska, Cynthia and Pelchmann, Laura and Landesberger, Tatiana von and Pohl, Margit},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231232}
}
@inproceedings{10.2312:vmv.20231233,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Visual-assisted Outlier Preservation for Scatterplot Sampling}},
  author = {Yang, Haiyan and Pajarola, Renato},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231233}
}
@inproceedings{10.2312:vmv.20231234,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Greedy Image Approximation for Artwork Generation via Contiguous Bézier Segments}},
  author = {Nehring-Wirxel, Julius and Lim, Isaak and Kobbelt, Leif},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231234}
}
@inproceedings{10.2312:vmv.20231235,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Semantic Image Abstraction using Panoptic Segmentation for Robotic Painting}},
  author = {Stroh, Michael and Gülzow, Jörg-Marvin and Deussen, Oliver},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231235}
}
@inproceedings{10.2312:vmv.20231236,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{MetaISP -- Exploiting Global Scene Structure for Accurate Multi-Device Color Rendition}},
  author = {Souza, Matheus and Heidrich, Wolfgang},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231236}
}
@inproceedings{10.2312:vmv.20231237,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Video-Driven Animation of Neural Head Avatars}},
  author = {Paier, Wolfgang and Hinzer, Paul and Hilsmann, Anna and Eisert, Peter},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231237}
}
@inproceedings{10.2312:vmv.20231238,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Leveraging BC6H Texture Compression and Filtering for Efficient Vector Field Visualization}},
  author = {Oehrl, Simon and Milke, Jan Frieder and Koenen, Jens and Kuhlen, Torsten W. and Gerrits, Tim},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231238}
}
@inproceedings{10.2312:vmv.20231239,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Optimizing Temporal Stability in Underwater Video Tone Mapping}},
  author = {Franz, Matthias and Thang, B. Matthias and Sackhoff, Pascal and Scholz, Timon and Möller, Jannis and Grogorick, Steve and Eisemann, Martin},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231239}
}
@inproceedings{10.2312:vmv.20231240,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Art-directable Stroke-based Rendering on Mobile Devices}},
  author = {Wagner, Ronja and Schulz, Sebastian and Reimann, Max and Semmo, Amir and Döllner, Jürgen and Trapp, Matthias},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231240}
}
@inproceedings{10.2312:vmv.20231241,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Out-of-Core Particle Tracing for Monte Carlo Rendering of Finite-Time Lyapunov Exponents}},
  author = {Grätz, Nicholas and Günther, Tobias},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231241}
}
@inproceedings{10.2312:vmv.20231242,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Autonomous Particles for In-Situ-Friendly Flow Map Sampling}},
  author = {Wolligant, Steve and Rössl, Christian and Chi, Cheng and Thévenin, Dominique and Theisel, Holger},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231242}
}
@inproceedings{10.2312:vmv.20231243,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Exploring Physical Latent Spaces for High-Resolution Flow Restoration}},
  author = {Paliard, Chloé and Thuerey, Nils and Um, Kiwon},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231243}
}
@inproceedings{10.2312:vmv.20231244,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Consistent SPH Rigid-Fluid Coupling}},
  author = {Bender, Jan and Westhofen, Lukas and Rhys Jeske, Stefan},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231244}
}
@inproceedings{10.2312:vmv.20231245,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids}},
  author = {Löschner, Fabian and Böttcher, Timna and Rhys Jeske, Stefan and Bender, Jan},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231245}
}
@inproceedings{10.2312:vmv.20231246,
  booktitle = {Vision, Modeling, and Visualization},
  editor = {Guthe, Michael and Grosch, Thorsten},
  title = {{Uncertain Stream Lines}},
  author = {Zimmermann, Janos and Motejat, Michael and Rössl, Christian and Theisel, Holger},
  year = {2023},
  publisher = {The Eurographics Association},
  ISBN = {978-3-03868-232-5},
  DOI = {10.2312/vmv.20231246}
}
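
The entries above can be used directly with BibTeX or biblatex; the citation keys mirror the DOIs. A minimal usage sketch follows, assuming the entries are saved to a file named vmv2023.bib (the file name is an assumption, not part of the collection):

\documentclass{article}
\begin{document}
Ray tracing of spherical harmonics glyphs is presented in~\cite{10.2312:vmv.20231223}.
\bibliographystyle{plain}
\bibliography{vmv2023}
\end{document}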

Recent Submissions

  • Item
    VMV 2023: Frontmatter
    (The Eurographics Association, 2023) Guthe, Michael; Grosch, Thorsten; Guthe, Michael; Grosch, Thorsten
  • Item
    Digitizing Interlocking Building Blocks
    (The Eurographics Association, 2023) Lieb, Sebastian; Thormählen, Thorsten; Rieger, Felix; Guthe, Michael; Grosch, Thorsten
    Interlocking building blocks (such as LEGO®) are well-known toys and allow the creation of physical models of real objects or the design of imaginative 3D structures. In this paper, we propose a novel approach for digitizing building blocks with the original LEGO® form factor. We add a microprocessor to each 4 x 2 block, and the blocks communicate with each other via a two-wire connection provided in every nub. This poses the additional challenge that communication and power supply must use the same two-wire connection, which is addressed by alternating between the two modes over time. We introduce a protocol that checks for connections and propagates all connection information through the block network. We can then pass this information to a connected computer, which reconstructs the structure of the block network. We present several successfully digitized example configurations and discuss failure cases. Furthermore, two end-user scenarios are demonstrated, which show the potential of our approach as an intuitive human-computer interface.
  • Item
    Improving NeRF Quality by Progressive Camera Placement for Free-Viewpoint Navigation
    (The Eurographics Association, 2023) Kopanas, Georgios; Drettakis, George; Guthe, Michael; Grosch, Thorsten
    Neural Radiance Fields, or NeRFs, have drastically improved novel view synthesis and 3D reconstruction for rendering. NeRFs achieve impressive results on object-centric reconstructions, but the quality of novel view synthesis with free-viewpoint navigation in complex environments (rooms, houses, etc.) is often problematic. While algorithmic improvements play an important role in the resulting quality of novel view synthesis, in this work, we show that because optimizing a NeRF is inherently a data-driven process, good quality data play a fundamental role in the final quality of the reconstruction. As a consequence, it is critical to choose the data samples - in this case the cameras - in a way that will eventually allow the optimization to converge to a solution that allows free-viewpoint navigation with good quality. Our main contribution is an algorithm that efficiently proposes new camera placements that improve visual quality with minimal assumptions. Our solution can be used with any NeRF model and outperforms baselines and similar work.
  • Item
    Ray Tracing Spherical Harmonics Glyphs
    (The Eurographics Association, 2023) Peters, Christoph; Patel, Tark; Usher, Will; Johnson, Chris R.; Guthe, Michael; Grosch, Thorsten
    Spherical harmonics glyphs are an established way to visualize high angular resolution diffusion imaging data. Starting from a unit sphere, each point on the surface is scaled according to the value of a linear combination of spherical harmonics basis functions. The resulting glyph visualizes an orientation distribution function. We present an efficient method to render these glyphs using ray tracing. Our method constructs a polynomial whose roots correspond to ray-glyph intersections. This polynomial has degree 2k+2 for spherical harmonics bands 0, 2, …, k. We then find all intersections in an efficient and numerically stable fashion through polynomial root finding. Our formulation also gives rise to a simple formula for normal vectors of the glyph. Additionally, we compute a nearly exact axis-aligned bounding box to make ray tracing of these glyphs even more efficient. Since our method finds all intersections for arbitrary rays, it lets us perform sophisticated shading and uncertainty visualization. Compared to prior work, it is faster, more flexible and more accurate.
  • Item
    N-SfC: Robust and Fast Shape Estimation from Caustic Images
    (The Eurographics Association, 2023) Kassubeck, Marc; Kappel, Moritz; Castillo, Susana; Magnor, Marcus; Guthe, Michael; Grosch, Thorsten
    This paper handles the highly challenging problem of reconstructing the shape of a refracting object from a single image of its resulting caustic. Due to the ubiquity of transparent refracting objects in everyday life, reconstruction of their shape entails a multitude of practical applications. While we focus our attention on inline shape reconstruction in glass fabrication processes, our methodology could be adapted to scenarios where the limiting factor is a lack of input measurements to constrain the reconstruction problem completely. The recent Shape from Caustics (SfC) method casts this problem as the inverse of a light propagation simulation for synthesis of the caustic image, which can be solved by a differentiable renderer. However, the inherent complexity of light transport through refracting surfaces currently limits the practical application due to reconstruction speed and robustness. Thus, we introduce Neural-Shape from Caustics (N-SfC), a learning-based extension incorporating two components into the reconstruction pipeline: a denoising module, which both alleviates the light transport simulation cost and helps find a better minimum; and an optimization process based on learned gradient descent, which enables better convergence using fewer iterations. Extensive experiments demonstrate that we significantly outperform the current state-of-the-art in both computational speed and final surface error.
  • Item
    Topology-Controlled Reconstruction from Partial Cross-Sections
    (The Eurographics Association, 2023) Shhadi, Amani; Barequet, Gill; Guthe, Michael; Grosch, Thorsten
    The problem of 3-dimensional reconstruction from planar cross-sections arises in many fields, such as biomedical image analysis and geographical information systems. The problem has been studied extensively in the past 40 years. Each cross-section in the input contains multiple contours, where each contour divides the plane into different material types. The reconstructed object is a valid volume (surrounded by a closed surface) that interpolates the input slices. Some previous works utilize prior information about the reconstructed object, such as its topology, for recovering the original shape of the object. These works assume that the input cross-sections are complete and do not contain areas of missing information. In many real-life cases, this assumption does not hold. Other existing works handle such inputs; however, the methods they suggest do not have topological guarantees for the reconstructed object. In this work, we present the first technique that provides topology control for 3-dimensional reconstruction from partial planar cross-sections. The input to our algorithm consists of an arbitrarily-oriented set of 2-dimensional cross-sections that may contain areas of missing information ("unknown" regions) and user-specified topology constraints on the reconstructed object. During the reconstruction process, we explore a set of distinct topologies for relabeling the "unknown" regions. We define a scoring function for calculating the likelihood of each topology. We then examine a set of representative topologies and choose the reconstruction that simultaneously satisfies the global topology and optimizes the scoring function.
  • Item
    PlenopticPoints: Rasterizing Neural Feature Points for High-Quality Novel View Synthesis
    (The Eurographics Association, 2023) Hahlbohm, Florian; Kappel, Moritz; Tauscher, Jan-Philipp; Eisemann, Martin; Magnor, Marcus; Guthe, Michael; Grosch, Thorsten
    This paper presents a point-based, neural rendering approach for complex real-world objects from a set of photographs. Our method is specifically geared towards representing fine detail and reflective surface characteristics at improved quality over current state-of-the-art methods. From the photographs, we create a 3D point model based on optimized neural feature points located on a regular grid. For rendering, we employ view-dependent spherical harmonics shading, differentiable rasterization, and a deep neural rendering network. By combining a point-based approach and novel regularizers, our method is able to accurately represent local detail such as fine geometry and high-frequency texture while at the same time convincingly interpolating unseen viewpoints during inference. Our method achieves about 7 frames per second at 800×800 pixel output resolution on commodity hardware, putting it within reach for real-time rendering applications.
  • Item
    Interactions for Seamlessly Coupled Exploration of High-Dimensional Images and Hierarchical Embeddings
    (The Eurographics Association, 2023) Vieth, Alexander; Lelieveldt, Boudewijn; Eisemann, Elmar; Vilanova, Anna; Höllt, Thomas; Guthe, Michael; Grosch, Thorsten
    High-dimensional images (i.e., with many attributes per pixel) are commonly acquired in many domains, such as geosciences or systems biology. The spatial and attribute information of such data are typically explored separately, e.g., by using coordinated views of an image representation and a low-dimensional embedding of the high-dimensional attribute data. Facing ever growing image data sets, hierarchical dimensionality reduction techniques lend themselves to overcome scalability issues. However, current embedding methods do not provide suitable interactions to reflect image space exploration. Specifically, it is not possible to adjust the level of detail in the embedding hierarchy to reflect changing level of detail in image space stemming from navigation such as zooming and panning. In this paper, we propose such a mapping from image navigation interactions to embedding space adjustments. We show how our mapping applies the "overview first, details-on-demand" characteristic inherent to image exploration in the high-dimensional attribute space. We compare our strategy with regular hierarchical embedding technique interactions and demonstrate the advantages of linking image and embedding interactions through a representative use case.
  • Item
    Perceptually Guided Automatic Parameter Optimization for Interactive Visualization
    (The Eurographics Association, 2023) Opitz, Daniel; Zirr, Tobias; Dachsbacher, Carsten; Tessari, Lorenzo; Guthe, Michael; Grosch, Thorsten
    We propose a new reference-free method for automatically optimizing the parameters of visualization techniques such that the perception of visual structures is improved. Manual tuning may require domain knowledge not only in the field of the analyzed data, but also deep knowledge of the visualization techniques, and thus often becomes challenging as the number of parameters that impact the result grows. To avoid this laborious and difficult task, we first derive an image metric that models the loss of perceived information in the processing of a displayed image by a human observer; good visualization parameters minimize this metric. Our model is loosely based on quantitative studies in the fields of perception and biology covering visual masking, photo receptor sensitivity, and local adaptation. We then pair our metric with a generic parameter tuning algorithm to arrive at an automatic optimization method that is oblivious to the concrete relationship between parameter sets and visualization. We demonstrate our method for several volume visualization techniques, where visual clutter, visibility of features, and illumination are often hard to balance. Since the metric can be efficiently computed using image transformations, it can be applied to many visualization techniques and problem settings in a unified manner, including continuous optimization during interactive visual exploration. We also evaluate the effectiveness of our approach in a user study that validates the improved perception of visual features in results optimized using our model of perception.
  • Item
    Neural Fields for Interactive Visualization of Statistical Dependencies in 3D Simulation Ensembles
    (The Eurographics Association, 2023) Farokhmanesh, Fatemeh; Höhlein, Kevin; Neuhauser, Christoph; Necker, Tobias; Weissmann, Martin; Miyoshi, Takemasa; Westermann, Rüdiger; Guthe, Michael; Grosch, Thorsten
    We present neural dependence fields (NDFs) - the first neural network that learns to compactly represent and efficiently reconstruct the statistical dependencies between the values of physical variables at different spatial locations in large 3D simulation ensembles. Going beyond linear dependencies, we consider mutual information as an exemplary measure of non-linear dependence. We demonstrate learning and reconstruction with a large weather forecast ensemble comprising 1000 members, each storing multiple physical variables at a 250×352×20 simulation grid. By circumventing compute-intensive statistical estimators at runtime, we demonstrate significantly reduced memory and computation requirements for reconstructing the major dependence structures. This enables embedding the estimator into a GPU-accelerated direct volume renderer and interactively visualizing all mutual dependencies for a selected domain point.
  • Item
    On the Beat: Analysing and Evaluating Synchronicity in Dance Performances
    (The Eurographics Association, 2023) Menzel, Malte; Tauscher, Jan-Philipp; Magnor, Marcus; Guthe, Michael; Grosch, Thorsten
    This paper presents a method to analyse and evaluate synchronicity in dance performances automatically. Synchronisation of a dancer's movement and the accompanying music is a vital characteristic of dance performances. We propose a method that fuses computer vision-based extraction of dancers' body pose information and audio beat tracking to examine the alignment of the dance motions with the background music. Specifically, the motion of the dancer is analysed for rhythmic dance movements that are then subsequently correlated to the musical beats of the soundtrack played during the performance. Using a single mobile phone video recording of a dance performance only, our system is easily usable in dance rehearsal contexts. Our method evaluates accuracy for every motion beat of the performance on a timeline giving users detailed insight into their performance. We evaluated the accuracy of our method using a dataset containing 17 video recordings of real world dance performances. Our results closely match assessments by professional dancers, indicating correct analysis by our method.
  • Item
    Visually Analyzing Topic Change Points in Temporal Text Collections
    (The Eurographics Association, 2023) Krause, Cedric; Rieger, Jonas; Flossdorf, Jonathan; Jentsch, Carsten; Beck, Fabian; Guthe, Michael; Grosch, Thorsten
    Texts are collected over time and reflect temporal changes in the themes that they cover. While some changes might slowly evolve, other changes abruptly surface as explicit change points. In an application study for a change point extraction method based on a rolling Latent Dirichlet Allocation (LDA), we have developed a visualization approach that allows exploring such change points and related change patterns. Our visualization not only provides an overview of topics, but supports the detailed exploration of temporal developments. The interplay of general topic contents, development, and similarities with detected change points reveals rich insights into different kinds of change patterns. The approach comprises a combination of views including topic timeline representations with detected change points, comparative word clouds, and temporal similarity matrices. In an interactive exploration, these views adapt to selected topics, words, or points in time. We demonstrate the use cases of our approach in an in-depth application example involving statisticians.
  • Item
    Factors Influencing Visual Comparison of Colored Directed Acyclic Graphs
    (The Eurographics Association, 2023) Graniczkowska, Cynthia; Pelchmann, Laura; Landesberger, Tatiana von; Pohl, Margit; Guthe, Michael; Grosch, Thorsten
    This paper presents a comprehensive investigation of the factors that influence visual comparison in colored node-link diagrams. We conducted a user study in which participants were asked to identify differences in pairs of directed acyclic graphs (DAGs) under time constraints. Previous studies focused on the perception of differences in node-link diagrams without coloring. Our results show that the individual coloring of nodes and edges significantly affects the detection of differences. We were able to confirm previous results, such as the influence of graph density, and also found that uniform coloring in certain areas of the graphs plays an important role in finding differences. Consequently, the results of this study hold potential for developing better comparative visualizations for diverse applications, such as finance or biology.
  • Item
    Visual-assisted Outlier Preservation for Scatterplot Sampling
    (The Eurographics Association, 2023) Yang, Haiyan; Pajarola, Renato; Guthe, Michael; Grosch, Thorsten
    Scatterplot sampling has long been an efficient and effective way to resolve the overplotting issues commonly occurring in large-scale scatterplot visualization applications. However, it is challenging to preserve the existence of low-density points or outliers after sampling for a sub-sampling algorithm if, at the same time, faithfully representing the relative data densities is of importance. In this work, we propose to address this issue in a visual-assisted manner. While the whole dataset is sub-sampled, the density of the outliers is modeled and visually integrated into the final scatterplot together with the sub-sampled point data. We showcase the effectiveness of our proposed method in various cases and user studies.
  • Item
    Greedy Image Approximation for Artwork Generation via Contiguous Bézier Segments
    (The Eurographics Association, 2023) Nehring-Wirxel, Julius; Lim, Isaak; Kobbelt, Leif; Guthe, Michael; Grosch, Thorsten
    The automatic creation of digital art has a long history in computer graphics. In this work, we focus on approximating input images to mimic artwork by the artist Kumi Yamashita, as well as the popular scribble art style. Both have in common that the artists create the works by using a single, contiguous thread (Yamashita) or stroke (scribble) that is placed seemingly at random when viewed at close range, but perceived as a tone-mapped picture when viewed from a distance. Our approach takes a rasterized image as input and creates a single, connected path by iteratively sampling a set of candidate segments that extend the current path and greedily selecting the best one. The candidates are sampled according to art style specific constraints, i.e. conforming to continuity constraints in the mathematical sense for the scribble art style. To model the perceptual discrepancy between close and far viewing distances, we minimize the difference between the input image and the image created by rasterizing our path after applying the contrast sensitivity function, which models how human vision blurs images when viewed from a distance. Our approach generalizes to colored images by using one path per color. We evaluate our approach on a wide range of input images and show that it is able to achieve good results for both art styles in grayscale and color.
  • Item
    Semantic Image Abstraction using Panoptic Segmentation for Robotic Painting
    (The Eurographics Association, 2023) Stroh, Michael; Gülzow, Jörg-Marvin; Deussen, Oliver; Guthe, Michael; Grosch, Thorsten
    We propose a comprehensive pipeline for generating adaptable image abstractions from input pictures, tailored explicitly for robotic painting tasks. Our pipeline addresses several key objectives, including the ability to paint from background to foreground, maintain fine details, capture structured regions accurately, and highlight important objects. To achieve this, we employ a panoptic segmentation network to predict the semantic class membership for each pixel in the image. This step provides us with a detailed understanding of the object categories present in the scene. Building upon the semantic segmentation results, we combine them with a color-based image over-segmentation technique. This process partitions the image into monochromatic regions, each corresponding to a specific semantic object. Next, we construct a hierarchical tree based on the segmentation results, which allows us to merge adjacent regions based on their color difference and semantic class. We take care to ensure that shapes belonging to different semantic objects are not merged together. We iteratively perform adjacency merging until no further combinations are possible, resulting in a refined hierarchical shape tree. To obtain the desired image abstraction, we filter the hierarchical shape tree by examining factors such as color differences, relative sizes, and the layering within the hierarchy of each region in relation to their parent regions. By employing this approach, we can preserve fine details, apply local filtering operations, and effectively combine regions with structured shapes. This results in image abstractions well-suited for robotic painting applications and artistic renderings.
  • Item
    MetaISP -- Exploiting Global Scene Structure for Accurate Multi-Device Color Rendition
    (The Eurographics Association, 2023) Souza, Matheus; Heidrich, Wolfgang; Guthe, Michael; Grosch, Thorsten
    Image signal processors (ISPs) are historically grown legacy software systems for reconstructing color images from noisy raw sensor measurements. Each smartphone manufacturer has developed its ISPs with its own characteristic heuristics for improving the color rendition, for example, skin tones and other visually essential colors. The recent interest in replacing the historically grown ISP systems with deep-learned pipelines to match DSLR's image quality improves structural features in the image. However, these works ignore the superior color processing based on semantic scene analysis that distinguishes mobile phone ISPs from DSLRs. Here we present MetaISP, a single model designed to learn how to translate between the color and local contrast characteristics of different devices. MetaISP takes the RAW image from device A as input and translates it to RGB images that inherit the appearance characteristics of devices A, B, and C. We achieve this result by employing a lightweight deep learning technique that conditions its output appearance based on the device of interest. In this approach, we leverage novel attention mechanisms inspired by cross-covariance to learn global scene semantics. Additionally, we make use of metadata that typically accompanies raw images, and we estimate scene illuminants when they are not available.
  • Item
    Video-Driven Animation of Neural Head Avatars
    (The Eurographics Association, 2023) Paier, Wolfgang; Hinzer, Paul; Hilsmann, Anna; Eisert, Peter; Guthe, Michael; Grosch, Thorsten
    We present a new approach for video-driven animation of high-quality neural 3D head models, addressing the challenge of person-independent animation from video input. Typically, high-quality generative models are learned for specific individuals from multi-view video footage, resulting in person-specific latent representations that drive the generation process. In order to achieve person-independent animation from video input, we introduce an LSTM-based animation network capable of translating person-independent expression features into personalized animation parameters of person-specific 3D head models. Our approach combines the advantages of personalized head models (high quality and realism) with the convenience of video-driven animation employing multi-person facial performance capture. We demonstrate the effectiveness of our approach on synthesized animations with high quality based on different source videos as well as an ablation study.
  • Item
    Leveraging BC6H Texture Compression and Filtering for Efficient Vector Field Visualization
    (The Eurographics Association, 2023) Oehrl, Simon; Milke, Jan Frieder; Koenen, Jens; Kuhlen, Torsten W.; Gerrits, Tim; Guthe, Michael; Grosch, Thorsten
    The steady advance of compute hardware is accompanied by an ever-steeper amount of data to be processed for visualization. Limited memory bandwidth provides a significant bottleneck to the runtime performance of visualization algorithms while limited video memory requires complex out-of-core loading techniques for rendering large datasets. Data compression methods aim to overcome these limitations, potentially at the cost of information loss. This work presents an approach to the compression of large data for flow visualization using the BC6H texture compression format natively supported, and therefore effortlessly leverageable, on modern GPUs. We assess the performance and accuracy of BC6H for compression of steady and unsteady vector fields and investigate its applicability to particle advection. The results indicate an improvement in memory utilization as well as runtime performance, at a cost of moderate loss in precision.
  • Item
    Optimizing Temporal Stability in Underwater Video Tone Mapping
    (The Eurographics Association, 2023) Franz, Matthias; Thang, B. Matthias; Sackhoff, Pascal; Scholz, Timon; Möller, Jannis; Grogorick, Steve; Eisemann, Martin; Guthe, Michael; Grosch, Thorsten
    In this paper, we present an approach for temporal stabilization of depth-based underwater image tone mapping methods for application to monocular RGB video. Typically, the goal is to improve the colors of focused objects, while leaving more distant regions nearly unchanged, to preserve the underwater look-and-feel of the overall image. To do this, many methods rely on estimated depth to control the recolorization process, i.e., to enhance colors (reduce blue tint) only for objects close to the camera. However, while single-view depth estimation is usually consistent within a frame, it often suffers from inconsistencies across sequential frames, resulting in color fluctuations during tone mapping. We propose a simple yet effective inter-frame stabilization of the computed depth maps to achieve stable tone mapping results. The evaluation of eight test sequences shows the effectiveness in a wide range of underwater scenarios.
  • Item
    Art-directable Stroke-based Rendering on Mobile Devices
    (The Eurographics Association, 2023) Wagner, Ronja; Schulz, Sebastian; Reimann, Max; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias; Guthe, Michael; Grosch, Thorsten
    This paper introduces an art-directable stroke-based rendering technique for transforming photos into painterly renditions on mobile devices. Unlike previous approaches that rely on time-consuming iterative computations and explicit brush-stroke geometry, our method offers an interactive image-based implementation tailored to the capabilities of modern mobile devices. The technique places curved brush strokes in multiple passes, leveraging a texture bombing algorithm. To maintain and highlight essential details for stylization, we incorporate additional information such as image salience, depth, and facial landmarks as parameters. Our technique enables a user to adjust and refine the stylized image using a wide range of parameters and masks during editing. The result is an interactive painterly stylization tool that supports high-resolution input images, providing users with an immersive and engaging artistic experience on their mobile devices.
  • Item
    Out-of-Core Particle Tracing for Monte Carlo Rendering of Finite-Time Lyapunov Exponents
    (The Eurographics Association, 2023) Grätz, Nicholas; Günther, Tobias; Guthe, Michael; Grosch, Thorsten
    The motion in time-dependent fluid flows is governed by Lagrangian coherent structures (LCS). One common approach to visualize hyperbolic LCS is to extract and visualize the finite-time Lyapunov exponent. Its visualization on large time-dependent fluid flows is challenging for two reasons. First, the time steps needed for particle tracing do not necessarily fit at once into memory. And second, conventional ray marching exhibits artifacts when the FTLE ridges are sharp, which instead requires Monte Carlo volume rendering techniques to produce unbiased results. So far, these two problems have only been looked at in isolation. In this paper, we implement the first out-of-core Monte Carlo FTLE tracer, which is able to visualize the finite-time Lyapunov exponent field of time-dependent fluid flows that do not fit into main memory at once. To achieve this, we designed a data processing pipeline that alternates between two phases: a photon tracing phase and a particle tracing phase. We demonstrate and evaluate the approach on several large time-dependent vector fields.
  • Item
    Autonomous Particles for In-Situ-Friendly Flow Map Sampling
    (The Eurographics Association, 2023) Wolligant, Steve; Rössl, Christian; Chi, Cheng; Thévenin, Dominique; Theisel, Holger; Guthe, Michael; Grosch, Thorsten
    Computing and storing flow maps is a common approach to processing and analyzing large flow simulations in a Lagrangian way. Accurate Lagrangian-based visualizations require a good sampling of the flow map. We present an In-Situ-friendly flow map sampling strategy for flows using Autonomous Particles that do not need information of neighboring particles: they can be advected individually without knowing about each other. The main idea is to observe a linear neighborhood of a particle during advection. As soon as the neighborhood cannot be considered linear anymore, an adaptive splitting is performed. For observing the linear neighborhood, each particle is equipped with an ellipsoid that also gets advected by the flow. By splitting these ellipsoids into smaller ones in regions of non-linear behavior, critical and more interesting regions of the flow map are more densely sampled. Our sampling approach uses only forward integration and no adaptive integration from the past. This makes it applicable in and well-suited for In-Situ environments. We compare our approach to existing sampling techniques and apply it to several artificial and real data sets.
  • Item
    Exploring Physical Latent Spaces for High-Resolution Flow Restoration
    (The Eurographics Association, 2023) Paliard, Chloé; Thuerey, Nils; Um, Kiwon; Guthe, Michael; Grosch, Thorsten
    We explore training deep neural network models in conjunction with physics simulations via partial differential equations (PDEs), using the simulated degrees of freedom as latent space for a neural network. In contrast to previous work, this paper treats the degrees of freedom of the simulated space purely as tools to be used by the neural network. We demonstrate this concept for learning reduced representations, as it is extremely challenging to faithfully preserve correct solutions over long time-spans with traditional reduced representations, particularly for solutions with large amounts of small scale features. This work focuses on the use of such physical, reduced latent space for the restoration of fine simulations, by training models that can modify the content of the reduced physical states as much as needed to best satisfy the learning objective. This autonomy allows the neural networks to discover alternate dynamics that significantly improve the performance in the given tasks. We demonstrate this concept for various fluid flows ranging from different turbulence scenarios to rising smoke plumes.
  • Item
    Consistent SPH Rigid-Fluid Coupling
    (The Eurographics Association, 2023) Bender, Jan; Westhofen, Lukas; Rhys Jeske, Stefan; Guthe, Michael; Grosch, Thorsten
    A common way to handle boundaries in SPH fluid simulations is to sample the surface of the boundary geometry using particles. These boundary particles are assigned the same properties as the fluid particles and are considered in the pressure force computation to avoid a penetration of the boundary. However, the pressure solver requires a pressure value for each particle. These are typically not computed for the boundary particles due to the computational overhead. Therefore, several strategies have been investigated in previous works to obtain boundary pressure values. A popular, simple technique is pressure mirroring, which mirrors the values from the fluid particles. This method is efficient, but may cause visual artifacts. More complex approaches like pressure extrapolation aim to avoid these artifacts at the cost of computation time. We introduce a constraint-based derivation of Divergence-Free SPH (DFSPH) - a common state-of-the-art pressure solver. This derivation gives us new insights on how to integrate boundary particles in the pressure solve without the need of explicitly computing boundary pressure values. This yields a more elegant formulation of the pressure solver that avoids the aforementioned problems.
  • Item
    Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids
    (The Eurographics Association, 2023) Löschner, Fabian; Böttcher, Timna; Rhys Jeske, Stefan; Bender, Jan; Guthe, Michael; Grosch, Thorsten
    In physically-based animation, producing detailed and realistic surface reconstructions for rendering is an important part of a simulation pipeline for particle-based fluids. In this paper we propose a post-processing approach to obtain smooth surfaces from "blobby" marching cubes triangulations without visual volume loss or shrinkage of drops and splashes. In contrast to other state-of-the-art methods that often require changes to the entire reconstruction pipeline, our approach is easy to implement and less computationally expensive. The main component is Laplacian mesh smoothing with our proposed feature weights that dampen the smoothing in regions of the mesh with splashes and isolated particles without reducing effectiveness in regions that are supposed to be flat. In addition, we suggest a specialized decimation procedure to avoid artifacts due to low-quality triangle configurations generated by marching cubes and a normal smoothing pass to further increase quality when visualizing the mesh with physically-based rendering. For improved computational efficiency of the method, we outline the option of integrating computation of our weights into an existing reconstruction pipeline as most involved quantities are already known during reconstruction. Finally, we evaluate our post-processing implementation on high-resolution smoothed particle hydrodynamics (SPH) simulations.
  • Item
    Uncertain Stream Lines
    (The Eurographics Association, 2023) Zimmermann, Janos; Motejat, Michael; Rössl, Christian; Theisel, Holger; Guthe, Michael; Grosch, Thorsten
    We present a new approach for the visual representation of uncertain stream lines in vector field ensembles. While existing approaches rely on a particular seed point for the analysis of uncertain stream lines, our approach considers a whole stream line as seed structure. With this we ensure that uncertain stream lines are independent of the particular choice of seed point, and that uncertain stream lines have the same dimensionality as their certain counterparts in a single vector field. Assuming a Gaussian distribution of stream lines, we provide a visual representation of uncertain stream lines based on a mean map and a covariance map. The extension to uncertain path lines in ensembles of time-dependent vector fields is straightforward and is also introduced in the paper. We analyze properties, discuss discretization and performance issues, and apply the new technique to a number of flow ensembles.