36-Issue 1
Browsing 36-Issue 1 by Issue Date
Now showing 1 - 20 of 24
Item Output-Sensitive Filtering of Streaming Volume Data (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Solteszova, Veronika; Birkeland, Åsmund; Stoppel, Sergej; Viola, Ivan; Bruckner, Stefan; Chen, Min and Zhang, Hao (Richard)
Real‐time volume data acquisition poses substantial challenges for the traditional visualization pipeline where data enhancement is typically seen as a pre‐processing step. In the case of 4D ultrasound data, for instance, costly processing operations to reduce noise and to remove artefacts need to be executed for every frame. To enable the use of high‐quality filtering operations in such scenarios, we propose an output‐sensitive approach to the visualization of streaming volume data. Our method evaluates the potential contribution of all voxels to the final image, allowing us to skip expensive processing operations that have little or no effect on the visualization. As filtering operations modify the data values, which may affect the visibility, our main contribution is a fast scheme to predict their maximum effect on the final image. Our approach prioritizes filtering of voxels with high contribution to the final visualization based on a maximal permissible error per pixel. With zero permissible error, the optimized filtering will yield a result that is identical to filtering of the entire volume. We provide a thorough technical evaluation of the approach and demonstrate it on several typical scenarios that require on‐the‐fly processing.
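The prioritization described in this abstract can be pictured in a few lines: visit voxels in decreasing order of their estimated contribution to the image and stop filtering once the remaining contributions fall below the permissible per-pixel error. In the sketch below, the contribution estimate and the filter_op callback are hypothetical placeholders, not the authors' scheme.

```python
import numpy as np

def output_sensitive_filter(volume, contribution, max_pixel_error, filter_op):
    """Filter only the voxels whose estimated contribution to the final image
    exceeds the per-pixel error budget.

    volume          : 3D numpy array of raw samples
    contribution    : 3D numpy array, estimated per-voxel effect on the image
                      (hypothetical estimate)
    max_pixel_error : error the user is willing to tolerate per pixel
    filter_op       : the expensive per-voxel filtering operation
    """
    filtered = volume.copy()
    # Visit voxels in order of decreasing contribution ...
    order = np.argsort(contribution, axis=None)[::-1]
    for flat_idx in order:
        idx = np.unravel_index(flat_idx, volume.shape)
        # ... and stop once the remaining voxels can no longer change any
        # pixel by more than the permissible error. With a zero budget,
        # every contributing voxel is filtered, matching the full result.
        if contribution[idx] <= max_pixel_error:
            break
        filtered[idx] = filter_op(volume, idx)
    return filtered
```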
Item Multi-Modal Perception for Selective Rendering (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Harvey, Carlo; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Chen, Min and Zhang, Hao (Richard)
A major challenge in generating high‐fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high‐fidelity simulation of light and sound is still unachievable in real time as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance by a series of novel exploitations; to render parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi‐modal VEs.
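A multi-modal map of the kind referred to above can be thought of as an image saliency map reweighted around the on-screen direction of a spatialized sound. The minimal sketch below assumes a Gaussian fall-off and a linear blend; the weighting and parameters are illustrative assumptions, not the maps evaluated in the paper.

```python
import numpy as np

def multimodal_map(image_saliency, sound_px, sound_py, sigma=60.0, weight=0.5):
    """Combine an image saliency map with an acoustic attention term centred
    on the pixel the spatialized sound direction projects to.

    image_saliency : 2D array in [0, 1]
    sound_px/py    : pixel position of the projected sound direction
    sigma, weight  : spread and strength of the acoustic term (assumed)
    """
    h, w = image_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian fall-off around the projected sound direction.
    acoustic = np.exp(-((xs - sound_px) ** 2 + (ys - sound_py) ** 2) / (2 * sigma ** 2))
    combined = (1 - weight) * image_saliency + weight * acoustic
    return combined / combined.max()
```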
Item A Survey of Surface Reconstruction from Point Clouds (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Berger, Matthew; Tagliasacchi, Andrea; Seversky, Lee M.; Alliez, Pierre; Guennebaud, Gaël; Levine, Joshua A.; Sharf, Andrei; Silva, Claudio T.; Chen, Min and Zhang, Hao (Richard)
The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece‐wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations—not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques and provide directions for future work in surface reconstruction.

Item Editorial (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min; Zhang, Hao (Richard); Chen, Min and Zhang, Hao (Richard)

Item Issue Information (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min and Zhang, Hao (Richard)

Item Consistent Partial Matching of Shape Collections via Sparse Modeling (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Cosmo, L.; Rodolà, E.; Albarelli, A.; Mémoli, F.; Cremers, D.; Chen, Min and Zhang, Hao (Richard)
Recent efforts in the area of joint object matching approach the problem by taking as input a set of pairwise maps, which are then jointly optimized across the whole collection so that certain accuracy and consistency criteria are satisfied. One natural requirement is cycle‐consistency—namely the fact that map composition should give the same result regardless of the path taken in the shape collection. In this paper, we introduce a novel approach to obtain consistent matches without requiring initial pairwise solutions to be given as input. We do so by optimizing a joint measure of metric distortion directly over the space of cycle‐consistent maps; in order to allow for partially similar and extra‐class shapes, we formulate the problem as a series of quadratic programs with sparsity‐inducing constraints, making our technique a natural candidate for analysing collections with a large presence of outliers. The particular form of the problem allows us to leverage results and tools from the field of evolutionary game theory. This enables a highly efficient optimization procedure which assures accurate and provably consistent solutions in a matter of minutes in collections with hundreds of shapes.

Item Digital Fabrication Techniques for Cultural Heritage: A Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M.; Chen, Min and Zhang, Hao (Richard)
Digital fabrication devices exploit basic technologies in order to create tangible reproductions of 3D digital models. Although current 3D printing pipelines still suffer from several restrictions, accuracy in reproduction has reached an excellent level. The manufacturing industry has been the main domain of 3D printing applications over the last decade. Digital fabrication techniques have also been demonstrated to be effective in many other contexts, including the consumer domain. The Cultural Heritage is one of the new application contexts and is an ideal domain to test the flexibility and quality of this new technology. This survey overviews the various fabrication technologies, discussing their strengths, limitations and costs. Various successful uses of 3D printing in the Cultural Heritage are analysed, which should also be useful for other application contexts. We review works that have attempted to extend fabrication technologies in order to deal with the specific issues in the use of digital fabrication in the Cultural Heritage. Finally, we also propose areas for future research.
Item A Survey of Visualization for Live Cell Imaging (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Pretorius, A. J.; Khan, I. A.; Errington, R. J.; Chen, Min and Zhang, Hao (Richard)
Live cell imaging is an important biomedical research paradigm for studying dynamic cellular behaviour. Although phenotypic data derived from images are difficult to explore and analyse, some researchers have successfully addressed this with visualization. Nonetheless, visualization methods for live cell imaging data have been reported in an ad hoc and fragmented fashion. This leads to a knowledge gap where it is difficult for biologists and visualization developers to evaluate the advantages and disadvantages of different visualization methods, and for visualization researchers to gain an overview of existing work to identify research priorities. To address this gap, we survey existing visualization methods for live cell imaging from a visualization research perspective for the first time. Based on recent visualization theory, we perform a structured qualitative analysis of visualization methods that includes characterizing the domain and data, abstracting tasks, and describing visual encoding and interaction design. Based on our survey, we identify and discuss research gaps that future work should address: the broad analytical context of live cell imaging; the importance of behavioural comparisons; links with dynamic data visualization; the consequences of different data modalities; shortcomings in interactive support; and, in addition to analysis, the value of the presentation of phenotypic data and insights to other stakeholders.

Item 2017 Cover Image: Mixing Bowl (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)

Item Synthesis of Human Skin Pigmentation Disorders (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Barros, R. S.; Walter, M.; Chen, Min and Zhang, Hao (Richard)
Changes in the human pigmentary system can lead to imbalances in the distribution of melanin in the skin resulting in artefacts known as pigmented lesions. Our work takes as departing point biological data regarding human skin, the pigmentary system and the melanocytes life cycle and presents a reaction–diffusion model for the simulation of the shape features of human‐pigmented lesions. The simulation of such disorders has many applications in dermatology, for instance, to assist dermatologists in diagnosis and training related to pigmentation disorders. Our study focuses, however, on applications related to computer graphics. Thus, we also present a method to seamlessly blend the results of our simulation model in images of healthy human skin. In this context, our model contributes to the generation of more realistic skin textures and therefore more realistic human models. In order to assess the quality of our results, we measured and compared the characteristics of the shape of real and synthesized pigmented lesions. We show that synthesized and real lesions have no statistically significant differences in their shape features. Visually, our results also compare favourably with images of real lesions, being virtually indistinguishable from real images.
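For readers unfamiliar with the class of model mentioned above, the fragment below shows one explicit step of a generic Gray-Scott reaction-diffusion system on a periodic grid. It is a textbook illustration of reaction-diffusion dynamics only; the authors' lesion model and its parameters are not reproduced here.

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of a generic Gray-Scott reaction-diffusion
    system on a periodic 2D grid (illustrative only, assumed parameters)."""
    def laplacian(Z):
        # Five-point stencil with wrap-around boundaries.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)
    reaction = U * V * V
    U += dt * (Du * laplacian(U) - reaction + f * (1 - U))
    V += dt * (Dv * laplacian(V) + reaction - (f + k) * V)
    return U, V
```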
Item A Taxonomy and Survey of Dynamic Graph Visualization (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Beck, Fabian; Burch, Michael; Diehl, Stephan; Weiskopf, Daniel; Chen, Min and Zhang, Hao (Richard)
Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node‐link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline‐based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.

Item Predicting Visual Perception of Material Structure in Virtual Environments (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Filip, J.; Vávra, R.; Havlíček, M.; Krupička, M.; Chen, Min and Zhang, Hao (Richard)
One of the most accurate yet still practical representations of material appearance is the Bidirectional Texture Function (BTF). The BTF can be viewed as an extension of the Bidirectional Reflectance Distribution Function (BRDF) for additional spatial information that includes local visual effects such as shadowing, interreflection, subsurface‐scattering, etc. However, the shift from BRDF to BTF represents not only a huge leap in respect to the realism of material reproduction, but also related high memory and computational costs stemming from the storage and processing of massive BTF data. In this work, we argue that each opaque material, regardless of its surface structure, can be safely substituted by a BRDF without the introduction of a significant perceptual error when viewed from an appropriate distance. Therefore, we ran a set of psychophysical studies over 25 materials to determine so‐called critical viewing distances, i.e. the minimal distances at which the material spatial structure (texture) cannot be visually discerned. Our analysis determined such typical distances for several material categories often used in interior design applications. Furthermore, we propose a combination of computational features that can predict such distances without the need for a psychophysical study. We show that our work can significantly reduce rendering costs in applications that process complex virtual scenes.
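The notion of a critical viewing distance can be illustrated with a back-of-the-envelope visual-angle check: the distance at which a surface feature of a given size subtends a chosen acuity threshold. The one-arcminute threshold below is an assumption made for illustration and is unrelated to the computational predictor proposed in the paper.

```python
import math

def critical_distance(feature_size_m, acuity_arcmin=1.0):
    """Distance (in metres) at which a surface feature of the given size
    subtends the assumed visual-acuity threshold, i.e. beyond which that
    texture detail should no longer be resolvable."""
    theta = math.radians(acuity_arcmin / 60.0)   # threshold angle in radians
    return feature_size_m / (2.0 * math.tan(theta / 2.0))

# Example: a 2 mm weave pattern becomes indiscernible beyond roughly 6.9 m.
print(round(critical_distance(0.002), 1))
```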
Item Synthesizing Ornamental Typefaces (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhang, Junsong; Wang, Yu; Xiao, Weiyi; Luo, Zhenshan; Chen, Min and Zhang, Hao (Richard)
We present a method for creating ornamental typeface images. Ornamental typefaces are a composite artwork made from the assemblage of images that carry similar semantics to words. These appealing word‐art works often attract the attention of more people and convey more meaningful information than general typefaces. However, traditional ornamental typefaces are usually created by skilled artists, which involves tedious manual processes, especially when searching for appropriate materials and assembling them. Hence, we aim to provide an easy way to create ornamental typefaces for novices. How to combine users' design intentions with image semantic and shape information to obtain readable and appealing ornamental typefaces is the key challenge to generate ornamental typefaces. To address this problem, we first provide a scribble‐based interface for users to segment the input typeface into strokes according to their design concepts. To ensure the consistency of the image semantics and stroke shape, we then define a semantic‐shape similarity metric to select a set of suitable images. Finally, to beautify the typeface structure, an optional optimal strategy is investigated. Experimental results and user studies show that the proposed algorithm effectively generates attractive and readable ornamental typefaces.

Item Accurate and Efficient Computation of Laplacian Spectral Distances and Kernels (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Patané, Giuseppe; Chen, Min and Zhang, Hao (Richard)
This paper introduces the Laplacian spectral distances, as a function that resembles the usual distance map, but exhibits properties (e.g. smoothness, locality, invariance to shape transformations) that make them useful to processing and analysing geometric data. Spectral distances are easily defined through a filtering of the Laplacian eigenpairs and reduce to the heat diffusion, wave, biharmonic and commute‐time distances for specific filters. In particular, the smoothness of the spectral distances and the encoding of local and global shape properties depend on the convergence of the filtered eigenvalues to zero. Instead of applying a truncated spectral approximation or prolongation operators, we propose a computation of Laplacian distances and kernels through the solution of sparse linear systems. Our approach is free of user‐defined parameters, overcomes the evaluation of the Laplacian spectrum and guarantees a higher approximation accuracy than previous work.
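The definition of a spectral distance through a filter on the Laplacian eigenvalues can be written down directly. The sketch below uses a dense, truncated eigendecomposition for clarity, which is exactly the kind of approximation the paper avoids in favour of sparse linear solves; the squaring convention used is one common choice, not necessarily the paper's.

```python
import numpy as np

def spectral_distance(L, i, j, filt, k=50):
    """Distance between vertices i and j induced by a filter 'filt' applied to
    the eigenvalues of the Laplacian L (dense matrix, illustrative truncation).
    filt(t) = exp(-s * t) gives a diffusion-like distance, filt(t) = 1 / t
    (for t > 0) a biharmonic-style one."""
    vals, vecs = np.linalg.eigh(L)            # eigenpairs, ascending eigenvalues
    vals, vecs = vals[:k], vecs[:, :k]
    # Filtered eigenvalues weight the eigenfunction differences; the constant
    # (zero-eigenvalue) mode is dropped.
    weights = np.array([filt(t) ** 2 if t > 1e-12 else 0.0 for t in vals])
    diff = vecs[i, :] - vecs[j, :]
    return float(np.sqrt(np.sum(weights * diff ** 2)))
```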
Item Constructive Visual Analytics for Text Similarity Detection (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Abdul-Rahman, A.; Roe, G.; Olsen, M.; Gladstone, C.; Whaling, R.; Cronk, N.; Morrissey, R.; Chen, M.; Chen, Min and Zhang, Hao (Richard)
Detecting similarity between texts is a frequently encountered text mining task. Because the measurement of similarity is typically composed of a number of metrics, and some measures are sensitive to subjective interpretation, a generic detector obtained using machine learning often has difficulties balancing the roles of different metrics according to the semantic context exhibited in a specific collection of texts. In order to facilitate human interaction in a visual analytics process for text similarity detection, we first map the problem of pairwise sequence comparison to that of image processing, allowing patterns of similarity to be visualized as a 2D pixelmap. We then devise a visual interface to enable users to construct and experiment with different detectors using primitive metrics, in a way similar to constructing an image processing pipeline. We deployed this new approach for the identification of commonplaces in 18th‐century literary and print culture. Domain experts were then able to make use of the prototype system to derive new scholarly discoveries and generate new hypotheses.
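The mapping from pairwise sequence comparison to an image is essentially a dot plot: a matrix whose cell (i, j) records how similar token i of one text is to token j of the other. The sketch below uses exact token equality as a stand-in for the primitive metrics the system lets users combine.

```python
import numpy as np

def similarity_pixelmap(tokens_a, tokens_b):
    """Pairwise-comparison matrix of two token sequences: cell (i, j) is 1
    when token i of text A equals token j of text B. Rendered as an image,
    diagonal runs of ones show shared passages (a classic dot plot)."""
    A = np.array(tokens_a, dtype=object)
    B = np.array(tokens_b, dtype=object)
    return (A[:, None] == B[None, :]).astype(np.uint8)

# Example: the shared phrase "not to be" appears as a diagonal of ones.
m = similarity_pixelmap("to be or not to be".split(), "not to be outdone".split())
print(m)
```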
Item Data‐Driven Shape Analysis and Processing (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Xu, Kai; Kim, Vladimir G.; Huang, Qixing; Kalogerakis, Evangelos; Chen, Min and Zhang, Hao (Richard)
Data‐driven methods serve an increasingly important role in discovering geometric, structural and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation of each other, data‐driven methods aggregate information from 3D model collections to improve the analysis, modelling and editing of shapes. Data‐driven methods are also able to learn computational models that reason about properties and relationships of shapes without relying on hard‐coded rules or explicitly programmed instructions. Through reviewing the literature, we provide an overview of the main concepts and components of these methods, as well as discuss their application to classification, segmentation, matching, reconstruction, modelling and exploration, as well as scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data‐driven shape analysis and processing.

Item Graphs in Scientific Visualization: A Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Chaoli; Tao, Jun; Chen, Min and Zhang, Hao (Richard)
Graphs represent general node‐link diagrams and have long been utilized in scientific visualization for data organization and management. However, using graphs as a visual representation and interface for navigating and exploring scientific data sets has a much shorter history, yet the amount of work along this direction is clearly on the rise in recent years. In this paper, we take a holistic perspective and survey graph‐based representations and techniques for scientific visualization. Specifically, we classify these representations and techniques into four categories, namely partition‐wise, relationship‐wise, structure‐wise and provenance‐wise. We survey related publications in each category, explaining the roles of graphs in related work and highlighting their similarities and differences. At the end, we reexamine these related publications following the graph‐based visualization pipeline. We also point out research trends and remaining challenges in graph‐based representations and techniques for scientific visualization.
Item Inversion Fractals and Iteration Processes in the Generation of Aesthetic Patterns (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Gdawiec, K.; Chen, Min and Zhang, Hao (Richard)
In this paper, we generalize the idea of star‐shaped set inversion fractals using iterations known from fixed point theory. We also extend the iterations from real parameters to so‐called ‐system numbers and propose the use of switching processes. All the proposed generalizations allowed us to obtain new and diverse fractal patterns that can be used, e.g. as textile and ceramics patterns. Moreover, we show that in the chaos game for iterated function systems—which is similar to the inversion fractals generation algorithm—the proposed generalizations do not give interesting results.
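The simplest star-shaped set is a circle, and circle-inversion fractals generated by random iteration give a feel for the construction being generalized. In the sketch below, alpha = 1 is the plain (Picard) step, while 0 < alpha < 1 mimics a Mann-type iteration from fixed point theory; this is an illustration of the idea only, not the paper's algorithm or its further generalizations.

```python
import random

def circle_inversion(p, centre, radius):
    """Invert point p with respect to a circle (the simplest star-shaped set)."""
    dx, dy = p[0] - centre[0], p[1] - centre[1]
    d2 = dx * dx + dy * dy or 1e-12
    s = radius * radius / d2
    return (centre[0] + s * dx, centre[1] + s * dy)

def inversion_fractal(circles, n_points=100000, alpha=1.0):
    """Random-iteration generation of a circle-inversion fractal.
    circles : list of ((cx, cy), radius) pairs
    alpha   : 1.0 is the plain step; values in (0, 1) blend the previous
              point with its inversion, imitating a Mann-type iteration."""
    pts, p = [], (0.1, 0.1)
    for _ in range(n_points):
        c, r = random.choice(circles)
        q = circle_inversion(p, c, r)
        p = ((1 - alpha) * p[0] + alpha * q[0],
             (1 - alpha) * p[1] + alpha * q[1])
        pts.append(p)
    return pts
```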
Item Visualization and Quantification for Interactive Analysis of Neural Connectivity in Drosophila (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Swoboda, N.; Moosburner, J.; Bruckner, S.; Yu, J. Y.; Dickson, B. J.; Bühler, K.; Chen, Min and Zhang, Hao (Richard)
Neurobiologists investigate the brain of the common fruit fly Drosophila melanogaster to discover neural circuits and link them to complex behaviour. Formulating new hypotheses about connectivity requires potential connectivity information between individual neurons, indicated by overlaps of arborizations of two or more neurons. As the number of higher order overlaps (i.e. overlaps of three or more arborizations) increases exponentially with the number of neurons under investigation, visualization is impeded by clutter and quantification becomes a burden. Existing solutions are restricted to visual or quantitative analysis of pairwise overlaps, as they rely on precomputed overlap data. We present a novel tool that complements existing methods for potential connectivity exploration by providing for the first time the possibility to compute and visualize higher order arborization overlaps on the fly and to interactively explore this information in both its spatial anatomical context and on a quantitative level. Qualitative evaluation by neuroscientists and non‐experts demonstrated the utility and usability of the tool.
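To see why higher-order overlaps become a burden, consider computing them directly from voxelized binary masks of each arborization: the number of candidate subsets grows exponentially with the number of neurons. The data layout below (a dict of boolean volumes) is a hypothetical simplification for illustration, not the tool's on-the-fly computation.

```python
from itertools import combinations
import numpy as np

def higher_order_overlaps(masks, min_order=3):
    """Given a dict {neuron_name: 3D boolean mask of its arborization},
    return the voxel count of every non-empty overlap of 'min_order' or
    more arborizations. Exhaustive and exponential in the number of
    neurons, which is why interactive tools compute overlaps on demand."""
    overlaps = {}
    names = list(masks)
    for k in range(min_order, len(names) + 1):
        for subset in combinations(names, k):
            inter = np.logical_and.reduce([masks[n] for n in subset])
            count = int(inter.sum())
            if count > 0:
                overlaps[subset] = count
    return overlaps
```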
Item Constrained Convex Space Partition for Ray Tracing in Architectural Environments (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Maria, M.; Horna, S.; Aveneau, L.; Chen, Min and Zhang, Hao (Richard)
This paper explores constrained convex space partition (CCSP) as a new acceleration structure for ray tracing. A CCSP is a graph, representing a space partition made up of empty convex volumes. The scene geometry is located on the boundary of the convex volumes. Therefore, each empty volume is bounded with two kinds of faces: occlusive ones (belonging to the scene geometry), and non‐occlusive ones. Given a ray, ray casting is performed by traversing the CCSP one volume at a time, until it hits the scene geometry. In this paper, this idea is applied to architectural scenes. We show that a CCSP makes it possible to cast several hundred million rays per second, even when they are not spatially coherent. Experiments are performed for large furnished buildings made up of hundreds of millions of polygons and containing thousands of light sources.
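The traversal described above can be pictured as a loop that repeatedly finds the face through which the ray leaves the current convex volume and stops when that face is occlusive. The cell/face structure assumed below (plane normal and offset, an occlusive flag, a neighbour link) is for illustration only and does not reflect the paper's actual data layout.

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def trace_ccsp(origin, direction, start_cell, eps=1e-7):
    """Walk a ray through a constrained convex space partition one empty
    convex cell at a time. Each cell is assumed to expose faces with an
    outward plane (normal, offset d with normal . x = d), an 'occlusive'
    flag and a 'neighbour' cell (hypothetical structure)."""
    cell, t0 = start_cell, 0.0
    while cell is not None:
        # Find the nearest face the ray exits through within this cell.
        best_t, best_face = float("inf"), None
        for face in cell.faces:
            denom = dot(face.normal, direction)
            if denom > eps:                        # ray leaves through this face
                t = (face.d - dot(face.normal, origin)) / denom
                if t0 < t < best_t:
                    best_t, best_face = t, face
        if best_face is None:
            return None                            # ray escaped (open scene)
        if best_face.occlusive:
            return best_t, best_face               # hit the scene geometry
        cell, t0 = best_face.neighbour, best_t     # step into the adjacent cell
```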