Browsing by Author "Berger, Matthew"
Now showing 1 - 4 of 4
Item: Compressive Neural Representations of Volumetric Scalar Fields (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Lu, Yuzhe; Jiang, Kairong; Levine, Joshua A.; Berger, Matthew
Editors: Borgo, Rita; Marai, G. Elisabeta; Landesberger, Tatiana von
Abstract: We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with careful quantization of network weights, we show that this approach yields highly compact representations that outperform state-of-the-art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time-varying scalar fields, optimization to preserve spatial gradients, and random-access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.
See the coordinate-network sketch after this listing.

Item: Integration-Aware Vector Field Super Resolution (The Eurographics Association, 2021)
Authors: Sahoo, Saroj; Berger, Matthew
Editors: Agus, Marco; Garth, Christoph; Kerren, Andreas
Abstract: In this work we propose an integration-aware super-resolution approach for 3D vector fields. Recent work in flow field super-resolution has achieved remarkable success using deep learning approaches. However, existing approaches fail to account for how vector fields are used in practice once an upsampled vector field is obtained. Specifically, a cornerstone of flow visualization is the visual analysis of streamlines, or integral curves of the vector field. To this end, we study how to incorporate streamlines as part of super-resolution in a deep learning context, such that upsampled vector fields are optimized to produce streamlines that resemble the ground truth upon integration. We consider common factors of integration as part of our approach (seeding, streamline length) and how these factors impact the resulting upsampled vector field. To demonstrate the effectiveness of our approach, we evaluate our model both quantitatively and qualitatively on different flow field datasets and compare our method against state-of-the-art techniques.
See the streamline-loss sketch after this listing.

Item: Interactively Assessing Disentanglement in GANs (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Jeong, Sangwon; Liu, Shusen; Berger, Matthew
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
Abstract: Generative adversarial networks (GANs) have witnessed tremendous growth in recent years, demonstrating wide applicability in many domains. However, GANs remain notoriously difficult for people to interpret, particularly modern GANs capable of generating photo-realistic imagery. In this work we contribute a visual analytics approach for GAN interpretability, focused on the analysis and visualization of GAN disentanglement. Disentanglement is concerned with the ability to control content produced by a GAN along a small number of distinct, yet semantic, factors of variation. The goal of our approach is to provide insight into GAN disentanglement, above and beyond coarse summaries, permitting a deeper analysis of the data distribution modeled by a GAN. Our visualization allows one to assess a single factor of variation in terms of groupings and trends in the data distribution, where our analysis seeks to relate the learned representation space of GANs with attribute-based semantic scoring of images produced by GANs. Through use cases, we show that our visualization is effective in assessing disentanglement, allowing one to quickly recognize a factor of variation and its overall quality. In addition, we show how our approach can highlight potential dataset biases learned by GANs.
See the latent-traversal sketch after this listing.

Item: Neural Flow Map Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Sahoo, Saroj; Lu, Yuzhe; Berger, Matthew
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
Abstract: In this paper we present a reconstruction technique for the reduction of unsteady flow data, based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn by the kinds of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, yet incomplete representation of unsteady flow, and our central objective is to find a representation that best recovers arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem (learning a function-space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that generalize strongly, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across several measures, including the reconstructed vector field, the flow map, and features derived from the flow map.
See the flow map fitting sketch after this listing.
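For the scalar field compression item, the following is a minimal sketch of a coordinate-based neural representation: a small MLP maps a 3D point to a scalar value and is fit by regressing against samples of the volume. The sinusoidal activations, layer sizes, learning rate, and placeholder data are assumptions for illustration, not the paper's exact setup, and weight quantization is omitted.

```python
# Hedged sketch: a coordinate network that maps 3D points to scalar values.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sinusoidal activation (SIREN-style)."""
    def __init__(self, in_features, out_features, omega=30.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class ScalarFieldNet(nn.Module):
    """Maps a point in [-1, 1]^3 to a single scalar value."""
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        blocks = [SineLayer(3, hidden)]
        blocks += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        self.net = nn.Sequential(*blocks, nn.Linear(hidden, 1))

    def forward(self, pts):
        return self.net(pts)

# Fit the network to (coordinate, value) samples of the volume; the random
# tensors below are placeholders for points sampled from a real scalar field.
model = ScalarFieldNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(4096, 3) * 2 - 1
values = torch.rand(4096, 1)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(coords) - values) ** 2).mean()
    loss.backward()
    opt.step()
```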
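For the integration-aware super-resolution item, the sketch below shows one way a streamline loss can be formed: streamlines are traced with fourth-order Runge-Kutta through both an upsampled field and the ground-truth field, and their pointwise deviation is penalized. The sampling convention, seeding, step size, and step count are assumptions; the super-resolution network itself is not shown.

```python
# Hedged sketch: a streamline (integration) loss between two 3D vector fields.
# Seeding, step size, and step count are illustrative assumptions.
import torch
import torch.nn.functional as F

def sample_field(field, pts):
    """Trilinearly sample a field of shape (N, 3, D, H, W) at points (N, P, 3)
    given in normalized [-1, 1] coordinates ordered (x, y, z) = (W, H, D)."""
    grid = pts.view(pts.shape[0], -1, 1, 1, 3)
    out = F.grid_sample(field, grid, mode='bilinear', align_corners=True)
    return out.view(pts.shape[0], 3, -1).permute(0, 2, 1)  # (N, P, 3)

def integrate_streamlines(field, seeds, steps=16, h=0.02):
    """Trace streamlines from seed points with fourth-order Runge-Kutta."""
    pts, trajectory = seeds, [seeds]
    for _ in range(steps):
        k1 = sample_field(field, pts)
        k2 = sample_field(field, pts + 0.5 * h * k1)
        k3 = sample_field(field, pts + 0.5 * h * k2)
        k4 = sample_field(field, pts + h * k3)
        pts = pts + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        trajectory.append(pts)
    return torch.stack(trajectory, dim=1)  # (N, steps + 1, P, 3)

def integration_loss(upsampled, ground_truth, seeds):
    """Penalize deviation between streamlines traced through both fields."""
    return ((integrate_streamlines(upsampled, seeds)
             - integrate_streamlines(ground_truth, seeds)) ** 2).mean()

# Placeholder usage: in practice `pred` would come from a super-resolution network.
gt = torch.randn(1, 3, 64, 64, 64)
pred = torch.randn(1, 3, 64, 64, 64, requires_grad=True)
seeds = torch.rand(1, 128, 3) * 2 - 1
integration_loss(pred, gt, seeds).backward()
```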
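For the GAN disentanglement item, the sketch below illustrates the kind of quantity such an analysis relates: how a semantic attribute score responds as a latent code is moved along a candidate factor of variation. The generator, attribute scorer, and factor direction are stand-in placeholders rather than the paper's models or its visual analytics interface.

```python
# Hedged sketch: score a latent traversal along one candidate factor of variation.
# The generator and attribute scorer are placeholders, not real pretrained models.
import torch
import torch.nn as nn
import torch.nn.functional as F

def traverse_factor(generator, attribute_scorer, z, direction, alphas):
    """Move a latent code along a direction and record how an attribute score
    responds; a clean monotone response suggests the factor is semantic."""
    scores = []
    for a in alphas:
        output = generator(z + a * direction)
        scores.append(attribute_scorer(output))
    return torch.stack(scores)

# Stand-ins for a pretrained GAN generator and an attribute classifier.
generator = nn.Sequential(nn.Linear(128, 64), nn.Tanh())
attribute_scorer = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

z = torch.randn(1, 128)
direction = F.normalize(torch.randn(1, 128), dim=1)
alphas = torch.linspace(-3.0, 3.0, steps=7)
print(traverse_factor(generator, attribute_scorer, z, direction, alphas).squeeze())
```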
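For the neural flow map reconstruction item, the sketch below fits a time-varying neural vector field so that integrating it with a fixed Runge-Kutta scheme reproduces flow map samples of the form (start point, start time, duration, end point). The network size, activation, integrator step count, and placeholder samples are assumptions for illustration.

```python
# Hedged sketch: fit a neural vector field so its integration matches flow map samples.
# Network size, step count, and the random placeholder samples are assumptions.
import torch
import torch.nn as nn

class VectorFieldNet(nn.Module):
    """Maps a space-time point (x, y, z, t) to a velocity vector."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def flow_map(model, x0, t0, duration, steps=8):
    """Approximate the flow map with fixed-step fourth-order Runge-Kutta."""
    x, h = x0, duration / steps
    for i in range(steps):
        t = t0 + i * h
        k1 = model(x, t)
        k2 = model(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = model(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = model(x + h * k3, t + h)
        x = x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Placeholder flow map samples (x0, t0, tau, x_end); real samples would come in situ.
x0, t0 = torch.rand(1024, 3), torch.rand(1024, 1)
tau, x_end = torch.full((1024, 1), 0.1), torch.rand(1024, 3)
model = VectorFieldNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    loss = ((flow_map(model, x0, t0, tau) - x_end) ** 2).mean()
    loss.backward()
    opt.step()
```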