37-Issue 1
Browsing 37-Issue 1 by Title
Now showing 1 - 20 of 34
Item 2018 Cover Image: Thingi10K (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhou, Qingnan; Jacobson, Alec; Chen, Min and Benes, Bedrich
Item Application‐Specific Tone Mapping Via Genetic Programming (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Debattista, K.; Chen, Min and Benes, Bedrich
High dynamic range (HDR) imagery permits the manipulation of real‐world data free from the limitations of traditional low dynamic range (LDR) content. The process of retargeting HDR content to traditional LDR imagery via tone mapping operators (TMOs) is useful for visualizing HDR content on traditional displays, supporting backwards‐compatible HDR compression and, more recently, is frequently used as input to a wide variety of computer vision applications. This work presents the automatic generation of TMOs for specific applications via the evolutionary computing method of genetic programming (GP). A straightforward, generic GP method that generates TMOs for a given fitness function and HDR content is presented. Its efficacy is demonstrated in the context of three applications: visualization of HDR content on LDR displays, feature mapping and compression. For these applications, results show good performance for the generated TMOs when compared to traditional methods. Furthermore, they demonstrate that the method is generalizable and could be used across various applications that require TMOs but for which dedicated successful TMOs have not yet been discovered.
Item ARAPLBS: Robust and Efficient Elasticity‐Based Optimization of Weights and Skeleton Joints for Linear Blend Skinning with Parametrized Bones (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Thiery, J.‐M.; Eisemann, E.; Chen, Min and Benes, Bedrich
We present a fast, robust and high‐quality technique to skin a mesh with reference to a skeleton. We consider the space of possible skeleton deformations (based on skeletal constraints, or skeletal animations), and compute skinning weights based on an optimization scheme to obtain as‐rigid‐as‐possible (ARAP) corresponding mesh deformations. We support stretchable‐and‐twistable bones (STBs) and spines by generalizing the ARAP deformations to stretchable deformers. In addition, our approach can optimize joint placements. If desired, a user can guide and interact with the results, facilitated by interactive feedback enabled by an efficient sparsification scheme. We demonstrate our technique on challenging inputs (STBs and spines, triangle and tetrahedral meshes featuring missing elements, boundaries, self‐intersections or wire edges).
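A note on the ARAPLBS item above: the linear blend skinning whose weights it optimizes is the standard blend of per‐bone transforms. The following is a minimal NumPy sketch of that blending step only, for orientation; the array layout, function name and the use of 4x4 homogeneous bone matrices are our assumptions, and the paper's actual contribution (the elasticity‐based optimization of weights and joint placements) is not shown.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    """Deform rest-pose vertices with standard linear blend skinning.

    rest_vertices:   (V, 3) rest-pose positions
    bone_transforms: (B, 4, 4) homogeneous bone transforms (rest -> posed)
    weights:         (V, B) skinning weights, each row summing to 1
    """
    V = rest_vertices.shape[0]
    homogeneous = np.hstack([rest_vertices, np.ones((V, 1))])       # (V, 4)
    # Transform every vertex by every bone: result has shape (B, V, 4).
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homogeneous)
    # Blend the per-bone results with the skinning weights: shape (V, 4).
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```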
Item Audiovisual Resource Allocation for Bimodal Virtual Environments (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Doukakis, E.; Debattista, K.; Harvey, C.; Bashford‐Rogers, T.; Chalmers, A.; Chen, Min and Benes, Bedrich
Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute these carefully in order to deliver the best possible perceptual experience. This paper investigates this balance of resources across multiple scenarios where combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken where participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli. In the experiment, increasing the quality of one of the stimuli decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget is increased, an approximately balanced distribution of resources is preferred between graphics and acoustics. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.
Item CLUST: Simulating Realistic Crowd Behaviour by Mining Pattern from Crowd Videos (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhao, M.; Cai, W.; Turner, S. J.; Chen, Min and Benes, Bedrich
In this paper, we present a data‐driven approach to simulate realistic locomotion of virtual pedestrians. We focus on simulating low‐level pedestrians' motion, where a pedestrian's motion is mainly affected by other pedestrians and static obstacles nearby, and the preferred velocities of agents (direction and speed) are obtained from higher level path planning models. Before the simulation, collision avoidance processes (i.e. examples) are extracted from videos to describe how pedestrians avoid collisions, which are then clustered using a hierarchical clustering algorithm with a novel distance function to find similar patterns of pedestrians' collision avoidance behaviours. During the simulation, at each time step, the perceived state of each agent is classified into one cluster using a neural network trained before the simulation. A sequence of velocity vectors, representing the agent's future motion, is selected among the examples corresponding to the chosen cluster. The proposed CLUST model is trained and applied to different real‐world datasets to evaluate its generality and effectiveness both qualitatively and quantitatively. The simulation results demonstrate that the proposed model can generate realistic crowd behaviours with comparable computational cost.
Item CorrelatedMultiples: Spatially Coherent Small Multiples With Constrained Multi‐Dimensional Scaling (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Liu, Xiaotong; Hu, Yifan; North, Stephen; Shen, Han‐Wei; Chen, Min and Benes, Bedrich
Displaying small multiples is a popular method for visually summarizing and comparing multiple facets of a complex data set. If the correlations between the data are not considered when displaying the multiples, searching and comparing specific items become more difficult, since a sequential scan of the display is often required. To address this issue, we introduce CorrelatedMultiples, a spatially coherent visualization based on small multiples, where the items are placed so that the distances reflect their dissimilarities. We propose a constrained multi‐dimensional scaling (CMDS) solver that preserves spatial proximity while forcing the items to remain within a fixed region. We evaluate the effectiveness of our approach by comparing CMDS with other competing methods through a controlled user study and a quantitative study, and demonstrate the usefulness of CorrelatedMultiples for visual search and comparison in three real‐world case studies.
Item CPU–GPU Parallel Framework for Real‐Time Interactive Cutting of Adaptive Octree‐Based Deformable Objects (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan; Chen, Min and Benes, Bedrich
A software framework taking advantage of the parallel processing capabilities of CPUs and GPUs is designed for the real‐time interactive cutting simulation of deformable objects. Deformable objects are modelled as voxels connected by links. The voxels are embedded in an octree mesh used for deformation. Cutting is performed by disconnecting links swept by the cutting tool and then adaptively refining octree elements near the cutting tool trajectory. A surface mesh used for visual display is reconstructed from disconnected links using the dual contour method. Spatial hashing of the octree mesh and topology‐aware interpolation of the distance field are used for collision detection. Our framework uses a novel GPU implementation for inter‐object collision and object self‐collision, while tool–object collision, cutting and deformation are assigned to the CPU, using multiple threads whenever possible. A novel method that splits cutting operations into four independent tasks running in parallel is designed. Our framework also performs data transfers between CPU and GPU simultaneously with other tasks to reduce their impact on performance. Simulation tests show that, compared to three‐threaded CPU implementations, our GPU‐accelerated collision is 53–160% faster and the overall simulation frame rate is 47–98% faster.
Item Data Abstraction for Visualizing Large Time Series (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Shurkhovetskyy, G.; Andrienko, N.; Andrienko, G.; Fuchs, G.; Chen, Min and Benes, Bedrich
Numeric time series is a class of data consisting of chronologically ordered observations represented by numeric values. Much of the data in various domains, such as financial, medical and scientific, are represented in the form of time series. To cope with the increasing sizes of datasets, numerous approaches for abstracting large temporal data have been developed in the area of data mining. Many of them have proved useful for time series visualization. However, despite the existence of numerous surveys on time series mining and visualization, there is no comprehensive classification of the existing methods based on the needs of visualization designers. We propose a classification framework that defines essential criteria for selecting an abstraction method with an eye to subsequent visualization and support of users' analysis tasks. We show that approaches developed in the data mining field are capable of creating representations that are useful for visualizing time series data. We evaluate these methods in terms of the defined criteria and provide a summary table that can be easily used for selecting suitable abstraction methods depending on data properties, desirable form of representation, behaviour features to be studied, required accuracy and level of detail, and the necessity of efficient search and querying. We also indicate directions for possible extension of the proposed classification framework.
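For context on the time series abstraction survey above: one of the simplest abstraction methods from the data mining family it classifies is piecewise aggregate approximation (PAA), which replaces equal‐length segments by their means. The sketch below is our own illustrative example and is not drawn from the paper.

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise aggregate approximation: abstract a long numeric time
    series into n_segments values, the mean of each (roughly) equal segment."""
    series = np.asarray(series, dtype=float)
    segments = np.array_split(series, n_segments)
    return np.array([segment.mean() for segment in segments])

# Example: reduce 10,000 noisy samples to 50 values for a compact overview.
t = np.linspace(0, 20 * np.pi, 10_000)
signal = np.sin(t) + 0.1 * np.random.randn(t.size)
overview = paa(signal, 50)
```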
Item Distinctive Approaches to Computer Graphics Education (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Santos, B. Sousa; Dischler, J.‐M.; Adzhiev, V.; Anderson, E.F.; Ferko, A.; Fryazinov, O.; Ilčík, M.; Ilčíková, I.; Slavik, P.; Sundstedt, V.; Svobodova, L.; Wimmer, M.; Zara, J.; Chen, Min and Benes, Bedrich
This paper presents the latest advances and research in Computer Graphics education in a nutshell. It is concerned with topics that were presented at the Education Track of the Eurographics Conference held in Lisbon in 2016. We describe works corresponding to approaches to Computer Graphics education that are unconventional in some way and attempt to tackle unsolved problems and challenges regarding the role of arts in computer graphics education, the role of research‐oriented activities in undergraduate education and the interaction among different areas of Computer Graphics, as well as their application to courses or extra‐curricular activities. We present related works addressing these topics and report experiences, successes and issues in implementing the approaches.
Item Easy Generation of Facial Animation Using Motion Graphs (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Serra, J.; Cetinaslan, O.; Ravikumar, S.; Orvalho, V.; Cosker, D.; Chen, Min and Benes, Bedrich
Facial animation is a time‐consuming and cumbersome task that requires years of experience and/or a complex and expensive set‐up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video‐games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean‐based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; however, we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video‐game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
Item Editorial (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min; Benes, Bedrich; Chen, Min and Benes, Bedrich
Item An Efficient Hybrid Incompressible SPH Solver with Interface Handling for Boundary Conditions (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Takahashi, Tetsuya; Dobashi, Yoshinori; Nishita, Tomoyuki; Lin, Ming C.; Chen, Min and Benes, Bedrich
We propose a hybrid smoothed particle hydrodynamics solver for efficiently simulating incompressible fluids using an interface handling method for boundary conditions in the pressure Poisson equation. We blend particle density computed with one smooth and one spiky kernel to improve the robustness against both fluid–fluid and fluid–solid collisions. To further improve the robustness and efficiency, we present a new interface handling method consisting of two components: free surface handling for Dirichlet boundary conditions and solid boundary handling for Neumann boundary conditions. Our free surface handling appropriately determines particles for Dirichlet boundary conditions using Jacobi‐based pressure prediction, while our solid boundary handling introduces a new term to ensure the solvability of the linear system. We demonstrate that our method outperforms the state‐of‐the‐art particle‐based fluid solvers.
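Relating to the hybrid SPH item above: the abstract mentions blending particle densities computed with one smooth and one spiky kernel. The sketch below shows what such a blended density estimate could look like using the standard poly6 and spiky kernels; the blend weight alpha, the convex combination and the brute‐force neighbour loop are our assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def poly6(r, h):
    """Smooth poly6 kernel, zero outside the support radius h."""
    w = np.zeros_like(r)
    inside = r <= h
    w[inside] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[inside]**2) ** 3
    return w

def spiky(r, h):
    """Spiky kernel, sharply peaked near r = 0, zero outside h."""
    w = np.zeros_like(r)
    inside = r <= h
    w[inside] = 15.0 / (np.pi * h**6) * (h - r[inside]) ** 3
    return w

def blended_density(positions, masses, h, alpha=0.5):
    """Per-particle density from a convex blend of the two kernels.
    Brute-force O(n^2) neighbour search, for illustration only."""
    n = positions.shape[0]
    density = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(positions - positions[i], axis=1)
        w = (1.0 - alpha) * poly6(r, h) + alpha * spiky(r, h)
        density[i] = np.sum(masses * w)
    return density
```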
Item Enhanced Visualization of Detected 3D Geometric Differences (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Palma, Gianpaolo; Sabbadin, Manuele; Corsini, Massimiliano; Cignoni, Paolo; Chen, Min and Benes, Bedrich
The wide availability of 3D acquisition devices makes their use for shape monitoring viable. The current techniques for the analysis of time‐varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show the original appearance of the 3D model at the same time. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen‐space, time‐based interpolation functions for the significant 3D differences and for the small variations to hide. We have validated the proposed approach in a user study on a different class of datasets, proving the objective and subjective effectiveness of the method.
Item Enhancing the Realism of Sketch and Painted Portraits With Adaptable Patches (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Lee, Yin‐Hsuan; Chang, Yu‐Kai; Chang, Yu‐Lun; Lin, I‐Chen; Wang, Yu‐Shuen; Lin, Wen‐Chieh; Chen, Min and Benes, Bedrich
Realizing unrealistic faces is a complicated task that requires a rich imagination and comprehension of facial structures. When face matching, warping or stitching techniques are applied, existing methods are generally incapable of capturing detailed personal characteristics, are disturbed by block boundary artefacts, or require painting‐photo pairs for training. This paper presents a data‐driven framework to enhance the realism of sketch and portrait paintings based only on photo samples. It retrieves the optimal patches of adaptable shapes and numbers according to the content of the input portrait and collected photos. These patches are then seamlessly stitched by chromatic gain and offset compensation and multi‐level blending. Experiments and user evaluations show that the proposed method is able to generate realistic and novel results for a moderately sized photo collection.
Item Frame Rate vs Resolution: A Subjective Evaluation of Spatiotemporal Perceived Quality Under Varying Computational Budgets (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Debattista, K.; Bugeja, K.; Spina, S.; Bashford‐Rogers, T.; Hulusic, V.; Chen, Min and Benes, Bedrich
Maximizing performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximize perceived quality. This work investigates perceived quality across computational budgets for the primary spatiotemporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (fps) at lower resolutions over 30 fps at higher resolutions. Experiment 2 (n = 24) explores the relationship further with more budgets and quality settings and again finds that 60 fps is generally preferred even when more resources are available. Experiment 3 (n = 25) permits the use of adaptive frame rates and analyses the resource allocation across seven budgets. Results show that while participants allocate more resources to frame rate at lower budgets, the situation reverses once higher budgets are available and a frame rate of around 40 fps is achieved. Overall, the results demonstrate a complex relationship between the effects of frame rate and resolution on perceived quality. This relationship can be harnessed, via the results and models presented, to obtain more cost‐effective virtual experiences.
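As a companion to the frame rate versus resolution study above: the trade‐off can be pictured as allocating a fixed pixel throughput between temporal and spatial resolution. The sketch below enumerates affordable resolutions at candidate frame rates under such a budget; the budget model and the example numbers are our simplifying assumptions and do not reproduce the paper's experimental setup.

```python
def affordable_resolutions(budget_pixels_per_s, frame_rates, base=(1920, 1080)):
    """For each candidate frame rate, scale a base resolution uniformly so
    that frame_rate * width * height stays within the pixel-throughput budget."""
    base_pixels = base[0] * base[1]
    options = []
    for fps in frame_rates:
        pixels_per_frame = budget_pixels_per_s / fps
        scale = min(1.0, (pixels_per_frame / base_pixels) ** 0.5)
        width, height = int(base[0] * scale), int(base[1] * scale)
        options.append((fps, width, height))
    return options

# Example: a hypothetical budget of 62 million shaded pixels per second.
for fps, w, h in affordable_resolutions(62_000_000, [30, 40, 60]):
    print(f"{fps} fps -> {w} x {h}")
```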
Item Guidelines for Quantitative Evaluation of Medical Visualizations on the Example of 3D Aneurysm Surface Comparisons (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Saalfeld, P.; Luz, M.; Berg, P.; Preim, B.; Saalfeld, S.; Chen, Min and Benes, Bedrich
Medical visualizations are highly adapted to a specific medical application scenario. Therefore, many researchers conduct qualitative evaluations with a low number of physicians or medical experts to assess the benefits of their visualization technique. Although this type of research has advantages, it is difficult to reproduce and can be subjectively biased. This makes it problematic to quantify the benefits of a new visualization technique. Quantitative evaluation can objectify research and help bring new visualization techniques into clinical practice. To support researchers, we present guidelines for quantitatively evaluating medical visualizations, considering their specific characteristics and difficulties. We demonstrate the adaptation of these guidelines on the example of comparative aneurysm surface visualizations. We developed three visualization techniques to compare aneurysm volumes. The visualization techniques depict two similar, but not identical, aneurysm surface meshes. In a user study with 34 participants and five aneurysm data sets, we assessed objective measures (accuracy and required time) and subjective ratings (suitability and likeability). The provided guidelines and the presentation of the different stages of the evaluation allow for an easy adaptation to other application areas of medical visualization.
Item Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey R.; Wong, Pak Chung; Cook, Kristin A.; Chen, Min and Benes, Bedrich
Real‐world systems change continuously. In domains such as traffic monitoring or cyber security, such changes occur within short time scales. This results in a streaming data problem and leads to unique challenges for the human in the loop, as analysts have to ingest and make sense of dynamic patterns in real time. While visualizations are being increasingly used by analysts to derive insights from streaming data, we lack a thorough characterization of the human‐centred design problems and a critical analysis of the state‐of‐the‐art solutions that exist for addressing these problems. In this paper, our goal is to fill this gap by studying how the state of the art in streaming data visualization handles these challenges and by reflecting on the gaps and opportunities. To this end, we make three contributions in this paper: (i) a problem characterization identifying domain‐specific goals and challenges for handling streaming data, (ii) a survey and analysis of the state of the art in streaming data visualization research, with a focus on how visualization design meets challenges specific to change perception, and (iii) reflections on the design trade‐offs and an outline of potential research directions for addressing the gaps in the state of the art.
Item Improved Corners with Multi‐Channel Signed Distance Fields (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chlumský, V.; Sloup, J.; Šimeček, I.; Chen, Min and Benes, Bedrich
We propose an extension to the state‐of‐the‐art text rendering technique based on sampling a 2D signed distance field from a texture. This extension significantly improves the visual quality of sharp corners, which are the most problematic feature for the original technique to reproduce. We achieve this by using a combination of multiple distance fields in conjunction, which together provide a more thorough representation of the given glyph's (or any other 2D shape's) geometry. This multi‐channel distance field representation is described along with its application in shader‐based rendering. The rendering process itself remains very simple and efficient, and is fully compatible with previous monochrome distance fields. The introduced method of multi‐channel distance field construction requires a vector representation of the input shape. A comparative measurement of rendering quality shows that the error in the output image can be reduced by up to several orders of magnitude.
Item Interactive Large‐Scale Procedural Forest Construction and Visualization Based on Particle Flow Simulation (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kohek, Štefan; Strnad, Damjan; Chen, Min and Benes, Bedrich
Interactive visualization of large forest scenes is challenging due to the large amount of geometric detail that needs to be generated and stored, particularly in scenarios with a moving observer such as forest walkthroughs or overflights. Here, we present a new method for large‐scale procedural forest generation and visualization at interactive rates. We propose a hybrid approach that combines geometry‐based and volumetric modelling techniques with a gradually transitioning level of detail (LOD). Nearer trees are constructed using an extended particle flow algorithm, in which particle trails outline the tree ramification in an inverse direction, i.e. from the leaves towards the roots. A reduced geometric representation of a tree is obtained by subsampling the trails. For distant trees, a new volumetric rendering technique in pixel space is introduced, which avoids geometry formation altogether and enables visualization of vast forest areas with millions of unique trees. We demonstrate that a GPU‐based implementation of the proposed method provides interactive frame rates in forest overflight scenarios, where new trees are constructed and their LOD adjusted on the fly.
Item Issue Information (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min and Benes, Bedrich
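Finally, regarding the multi‐channel signed distance field item listed above: such fields are commonly reconstructed per pixel by taking the median of the three channels before thresholding. The CPU‐side sketch below illustrates that reconstruction step only; the texel layout, threshold and smoothing parameters are assumptions for illustration, and the construction of the field itself is the subject of the paper.

```python
import numpy as np

def median_of_three(r, g, b):
    """Element-wise median of three channels."""
    return np.maximum(np.minimum(r, g), np.minimum(np.maximum(r, g), b))

def msdf_coverage(texels, threshold=0.5, smoothing=0.02):
    """Reconstruct glyph coverage from an RGB multi-channel distance field.

    texels: (H, W, 3) array with distances encoded around `threshold`.
    Returns an (H, W) coverage image in [0, 1].
    """
    r, g, b = texels[..., 0], texels[..., 1], texels[..., 2]
    signed_dist = median_of_three(r, g, b) - threshold
    # Soft threshold, analogous to a smoothstep in a fragment shader.
    return np.clip(signed_dist / smoothing + 0.5, 0.0, 1.0)
```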