36-Issue 1

Issue Information

Issue Information

Report

2017 Cover Image: Mixing Bowl

Marra, Alessia
Nitti, Maurizio
Papas, Marios
Müller, Thomas
Gross, Markus
Jarosz, Wojciech
Novák, Jan
Editorial

Editorial

Chen, Min
Zhang, Hao (Richard)
Articles

Digital Fabrication Techniques for Cultural Heritage: A Survey

Scopigno, R.
Cignoni, P.
Pietroni, N.
Callieri, M.
Dellepiane, M.
Articles

Inversion Fractals and Iteration Processes in the Generation of Aesthetic Patterns

Gdawiec, K.
Articles

Sparse GPU Voxelization of Yarn‐Level Cloth

Lopez‐Moreno, Jorge
Miraut, David
Cirio, Gabriel
Otaduy, Miguel A.
Articles

A Survey of Visualization for Live Cell Imaging

Pretorius, A. J.
Khan, I. A.
Errington, R. J.
Articles

Synthesizing Ornamental Typefaces

Zhang, Junsong
Wang, Yu
Xiao, Weiyi
Luo, Zhenshan
Articles

Discovering Structured Variations Via Template Matching

Ceylan, Duygu
Dang, Minh
Mitra, Niloy J.
Neubert, Boris
Pauly, Mark
Articles

A Taxonomy and Survey of Dynamic Graph Visualization

Beck, Fabian
Burch, Michael
Diehl, Stephan
Weiskopf, Daniel
Articles

Predicting Visual Perception of Material Structure in Virtual Environments

Filip, J.
Vávra, R.
Havlíček, M.
Krupička, M.
Articles

Data‐Driven Shape Analysis and Processing

Xu, Kai
Kim, Vladimir G.
Huang, Qixing
Kalogerakis, Evangelos
Articles

Multi-Modal Perception for Selective Rendering

Harvey, Carlo
Debattista, Kurt
Bashford-Rogers, Thomas
Chalmers, Alan
Articles

Accurate and Efficient Computation of Laplacian Spectral Distances and Kernels

Patané, Giuseppe
Articles

Visualization and Quantification for Interactive Analysis of Neural Connectivity in Drosophila

Swoboda, N.
Moosburner, J.
Bruckner, S.
Yu, J. Y.
Dickson, B. J.
Bühler, K.
Articles

Towards Globally Optimal Normal Orientations for Large Point Clouds

Schertler, Nico
Savchynskyy, Bogdan
Gumhold, Stefan
Articles

Output-Sensitive Filtering of Streaming Volume Data

Solteszova, Veronika
Birkeland, Åsmund
Stoppel, Sergej
Viola, Ivan
Bruckner, Stefan
Articles

Consistent Partial Matching of Shape Collections via Sparse Modeling

Cosmo, L.
Rodolà, E.
Albarelli, A.
Mémoli, F.
Cremers, D.
Articles

Constructive Visual Analytics for Text Similarity Detection

Abdul-Rahman, A.
Roe, G.
Olsen, M.
Gladstone, C.
Whaling, R.
Cronk, N.
Morrissey, R.
Chen, M.
Articles

Partial Functional Correspondence

Rodolà, E.
Cosmo, L.
Bronstein, M. M.
Torsello, A.
Cremers, D.
Articles

Synthesis of Human Skin Pigmentation Disorders

Barros, R. S.
Walter, M.
Articles

Graphs in Scientific Visualization: A Survey

Wang, Chaoli
Tao, Jun
Articles

Constrained Convex Space Partition for Ray Tracing in Architectural Environments

Maria, M.
Horna, S.
Aveneau, L.
Articles

A Survey of Surface Reconstruction from Point Clouds

Berger, Matthew
Tagliasacchi, Andrea
Seversky, Lee M.
Alliez, Pierre
Guennebaud, Gaël
Levine, Joshua A.
Sharf, Andrei
Silva, Claudio T.


BibTeX (36-Issue 1)
                
@article{10.1111:cgf.13058,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13058}
}

@article{10.1111:cgf.13093,
  journal = {Computer Graphics Forum},
  title = {{2017 Cover Image: Mixing Bowl}},
  author = {Marra, Alessia and Nitti, Maurizio and Papas, Marios and Müller, Thomas and Gross, Markus and Jarosz, Wojciech and Novák, Jan},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13093}
}

@article{10.1111:cgf.13094,
  journal = {Computer Graphics Forum},
  title = {{Editorial}},
  author = {Chen, Min and Zhang, Hao (Richard)},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13094}
}

@article{10.1111:cgf.12781,
  journal = {Computer Graphics Forum},
  title = {{Digital Fabrication Techniques for Cultural Heritage: A Survey}},
  author = {Scopigno, R. and Cignoni, P. and Pietroni, N. and Callieri, M. and Dellepiane, M.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12781}
}

@article{10.1111:cgf.12783,
  journal = {Computer Graphics Forum},
  title = {{Inversion Fractals and Iteration Processes in the Generation of Aesthetic Patterns}},
  author = {Gdawiec, K.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12783}
}

@article{10.1111:cgf.12782,
  journal = {Computer Graphics Forum},
  title = {{Sparse GPU Voxelization of Yarn-Level Cloth}},
  author = {Lopez-Moreno, Jorge and Miraut, David and Cirio, Gabriel and Otaduy, Miguel A.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12782}
}

@article{10.1111:cgf.12784,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Visualization for Live Cell Imaging}},
  author = {Pretorius, A. J. and Khan, I. A. and Errington, R. J.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12784}
}

@article{10.1111:cgf.12785,
  journal = {Computer Graphics Forum},
  title = {{Synthesizing Ornamental Typefaces}},
  author = {Zhang, Junsong and Wang, Yu and Xiao, Weiyi and Luo, Zhenshan},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12785}
}

@article{10.1111:cgf.12788,
  journal = {Computer Graphics Forum},
  title = {{Discovering Structured Variations Via Template Matching}},
  author = {Ceylan, Duygu and Dang, Minh and Mitra, Niloy J. and Neubert, Boris and Pauly, Mark},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12788}
}

@article{10.1111:cgf.12791,
  journal = {Computer Graphics Forum},
  title = {{A Taxonomy and Survey of Dynamic Graph Visualization}},
  author = {Beck, Fabian and Burch, Michael and Diehl, Stephan and Weiskopf, Daniel},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12791}
}

@article{10.1111:cgf.12789,
  journal = {Computer Graphics Forum},
  title = {{Predicting Visual Perception of Material Structure in Virtual Environments}},
  author = {Filip, J. and Vávra, R. and Havlíček, M. and Krupička, M.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12789}
}

@article{10.1111:cgf.12790,
  journal = {Computer Graphics Forum},
  title = {{Data-Driven Shape Analysis and Processing}},
  author = {Xu, Kai and Kim, Vladimir G. and Huang, Qixing and Kalogerakis, Evangelos},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12790}
}

@article{10.1111:cgf.12793,
  journal = {Computer Graphics Forum},
  title = {{Multi-Modal Perception for Selective Rendering}},
  author = {Harvey, Carlo and Debattista, Kurt and Bashford-Rogers, Thomas and Chalmers, Alan},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12793}
}

@article{10.1111:cgf.12794,
  journal = {Computer Graphics Forum},
  title = {{Accurate and Efficient Computation of Laplacian Spectral Distances and Kernels}},
  author = {Patané, Giuseppe},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12794}
}

@article{10.1111:cgf.12792,
  journal = {Computer Graphics Forum},
  title = {{Visualization and Quantification for Interactive Analysis of Neural Connectivity in Drosophila}},
  author = {Swoboda, N. and Moosburner, J. and Bruckner, S. and Yu, J. Y. and Dickson, B. J. and Bühler, K.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12792}
}

@article{10.1111:cgf.12795,
  journal = {Computer Graphics Forum},
  title = {{Towards Globally Optimal Normal Orientations for Large Point Clouds}},
  author = {Schertler, Nico and Savchynskyy, Bogdan and Gumhold, Stefan},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12795}
}

@article{10.1111:cgf.12799,
  journal = {Computer Graphics Forum},
  title = {{Output-Sensitive Filtering of Streaming Volume Data}},
  author = {Solteszova, Veronika and Birkeland, Åsmund and Stoppel, Sergej and Viola, Ivan and Bruckner, Stefan},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12799}
}

@article{10.1111:cgf.12796,
  journal = {Computer Graphics Forum},
  title = {{Consistent Partial Matching of Shape Collections via Sparse Modeling}},
  author = {Cosmo, L. and Rodolà, E. and Albarelli, A. and Mémoli, F. and Cremers, D.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12796}
}

@article{10.1111:cgf.12798,
  journal = {Computer Graphics Forum},
  title = {{Constructive Visual Analytics for Text Similarity Detection}},
  author = {Abdul-Rahman, A. and Roe, G. and Olsen, M. and Gladstone, C. and Whaling, R. and Cronk, N. and Morrissey, R. and Chen, M.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12798}
}

@article{10.1111:cgf.12797,
  journal = {Computer Graphics Forum},
  title = {{Partial Functional Correspondence}},
  author = {Rodolà, E. and Cosmo, L. and Bronstein, M. M. and Torsello, A. and Cremers, D.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12797}
}

@article{10.1111:cgf.12943,
  journal = {Computer Graphics Forum},
  title = {{Synthesis of Human Skin Pigmentation Disorders}},
  author = {Barros, R. S. and Walter, M.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12943}
}

@article{10.1111:cgf.12800,
  journal = {Computer Graphics Forum},
  title = {{Graphs in Scientific Visualization: A Survey}},
  author = {Wang, Chaoli and Tao, Jun},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12800}
}

@article{10.1111:cgf.12801,
  journal = {Computer Graphics Forum},
  title = {{Constrained Convex Space Partition for Ray Tracing in Architectural Environments}},
  author = {Maria, M. and Horna, S. and Aveneau, L.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12801}
}

@article{10.1111:cgf.12802,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Surface Reconstruction from Point Clouds}},
  author = {Berger, Matthew and Tagliasacchi, Andrea and Seversky, Lee M. and Alliez, Pierre and Guennebaud, Gaël and Levine, Joshua A. and Sharf, Andrei and Silva, Claudio T.},
  year = {2017},
  publisher = {© 2017 The Eurographics Association and John Wiley \& Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12802}
}

Browse

Recent Submissions

Now showing 1 - 24 of 24
  • Item
    Issue Information
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min and Zhang, Hao (Richard)
  • Item
    2017 Cover Image: Mixing Bowl
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)
  • Item
    Editorial
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min; Zhang, Hao (Richard); Chen, Min and Zhang, Hao (Richard)
  • Item
    Digital Fabrication Techniques for Cultural Heritage: A Survey
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M.; Chen, Min and Zhang, Hao (Richard)
    Digital fabrication devices exploit basic technologies in order to create tangible reproductions of 3D digital models. Although current 3D printing pipelines still suffer from several restrictions, accuracy in reproduction has reached an excellent level. The manufacturing industry has been the main domain of 3D printing applications over the last decade. Digital fabrication techniques have also been demonstrated to be effective in many other contexts, including the consumer domain. The Cultural Heritage is one of the new application contexts and is an ideal domain to test the flexibility and quality of this new technology. This survey overviews the various fabrication technologies, discussing their strengths, limitations and costs. Various successful uses of 3D printing in the Cultural Heritage are analysed, which should also be useful for other application contexts. We review works that have attempted to extend fabrication technologies in order to deal with the specific issues in the use of digital fabrication in the Cultural Heritage. Finally, we also propose areas for future research.
  • Item
    Inversion Fractals and Iteration Processes in the Generation of Aesthetic Patterns
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Gdawiec, K.; Chen, Min and Zhang, Hao (Richard)
    In this paper, we generalize the idea of star‐shaped set inversion fractals using iterations known from fixed point theory. We also extend the iterations from real parameters to so‐called ‐system numbers and propose the use of switching processes. All the proposed generalizations allowed us to obtain new and diverse fractal patterns that can be used, e.g. as textile and ceramics patterns. Moreover, we show that in the chaos game for iterated function systems—which is similar to the inversion fractals generation algorithm—the proposed generalizations do not give interesting results.
  • Item
    Sparse GPU Voxelization of Yarn‐Level Cloth
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Lopez‐Moreno, Jorge; Miraut, David; Cirio, Gabriel; Otaduy, Miguel A.; Chen, Min and Zhang, Hao (Richard)
    Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub‐surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features in the OpenGL model, like the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined into our pipeline to interactively voxelize millions of polygons into a set of large three‐dimensional (3D) textures (>10 elements), generating a volume with sub‐voxel accuracy, which is suitable even for high‐density woven cloth such as linen.
  • Item
    A Survey of Visualization for Live Cell Imaging
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Pretorius, A. J.; Khan, I. A.; Errington, R. J.; Chen, Min and Zhang, Hao (Richard)
    Live cell imaging is an important biomedical research paradigm for studying dynamic cellular behaviour. Although phenotypic data derived from images are difficult to explore and analyse, some researchers have successfully addressed this with visualization. Nonetheless, visualization methods for live cell imaging data have been reported in an ad hoc and fragmented fashion. This leads to a knowledge gap where it is difficult for biologists and visualization developers to evaluate the advantages and disadvantages of different visualization methods, and for visualization researchers to gain an overview of existing work to identify research priorities. To address this gap, we survey existing visualization methods for live cell imaging from a visualization research perspective for the first time. Based on recent visualization theory, we perform a structured qualitative analysis of visualization methods that includes characterizing the domain and data, abstracting tasks, and describing visual encoding and interaction design. Based on our survey, we identify and discuss research gaps that future work should address: the broad analytical context of live cell imaging; the importance of behavioural comparisons; links with dynamic data visualization; the consequences of different data modalities; shortcomings in interactive support; and, in addition to analysis, the value of the presentation of phenotypic data and insights to other stakeholders.
  • Item
    Synthesizing Ornamental Typefaces
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhang, Junsong; Wang, Yu; Xiao, Weiyi; Luo, Zhenshan; Chen, Min and Zhang, Hao (Richard)
    We present a method for creating ornamental typeface images. Ornamental typefaces are composite artworks made from the assemblage of images that carry similar semantics to words. These appealing word‐art works often attract the attention of more people and convey more meaningful information than general typefaces. However, traditional ornamental typefaces are usually created by skilled artists, which involves tedious manual processes, especially when searching for appropriate materials and assembling them. Hence, we aim to provide an easy way to create ornamental typefaces for novices. How to combine users' design intentions with image semantic and shape information to obtain readable and appealing ornamental typefaces is the key challenge in generating ornamental typefaces. To address this problem, we first provide a scribble‐based interface for users to segment the input typeface into strokes according to their design concepts. To ensure the consistency of the image semantics and stroke shape, we then define a semantic‐shape similarity metric to select a set of suitable images. Finally, to beautify the typeface structure, an optional optimal strategy is investigated. Experimental results and user studies show that the proposed algorithm effectively generates attractive and readable ornamental typefaces.
  • Item
    Discovering Structured Variations Via Template Matching
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Ceylan, Duygu; Dang, Minh; Mitra, Niloy J.; Neubert, Boris; Pauly, Mark; Chen, Min and Zhang, Hao (Richard)
    Understanding patterns of variation from raw measurement data remains a central goal of shape analysis. Such an understanding reveals which elements are repeated, or how elements can be derived as structured variations from a common base element. We investigate this problem in the context of 3D acquisitions of buildings. Utilizing a set of template models, we discover geometric similarities across a set of building elements. Each template is equipped with a deformation model that defines variations of a base geometry. Central to our algorithm is a simultaneous template matching and deformation analysis that detects patterns across building elements by extracting similarities in the deformation modes of their matching templates. We demonstrate that such an analysis can successfully detect structured variations even for noisy and incomplete data.
  • Item
    A Taxonomy and Survey of Dynamic Graph Visualization
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Beck, Fabian; Burch, Michael; Diehl, Stephan; Weiskopf, Daniel; Chen, Min and Zhang, Hao (Richard)
    Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node‐link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline‐based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.
  • Item
    Predicting Visual Perception of Material Structure in Virtual Environments
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Filip, J.; Vávra, R.; Havlíček, M.; Krupička, M.; Chen, Min and Zhang, Hao (Richard)
    One of the most accurate yet still practical representations of material appearance is the Bidirectional Texture Function (BTF). The BTF can be viewed as an extension of the Bidirectional Reflectance Distribution Function (BRDF) for additional spatial information that includes local visual effects such as shadowing, interreflection, subsurface‐scattering, etc. However, the shift from BRDF to BTF represents not only a huge leap with respect to the realism of material reproduction, but also related high memory and computational costs stemming from the storage and processing of massive BTF data. In this work, we argue that each opaque material, regardless of its surface structure, can be safely substituted by a BRDF without the introduction of a significant perceptual error when viewed from an appropriate distance. Therefore, we ran a set of psychophysical studies over 25 materials to determine so‐called critical viewing distances, i.e. the minimal distances at which the material spatial structure (texture) cannot be visually discerned. Our analysis determined such typical distances for several material categories often used in interior design applications. Furthermore, we propose a combination of computational features that can predict such distances without the need for a psychophysical study. We show that our work can significantly reduce rendering costs in applications that process complex virtual scenes.
  • Item
    Data‐Driven Shape Analysis and Processing
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Xu, Kai; Kim, Vladimir G.; Huang, Qixing; Kalogerakis, Evangelos; Chen, Min and Zhang, Hao (Richard)
    Data‐driven methods serve an increasingly important role in discovering geometric, structural and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation of each other, data‐driven methods aggregate information from 3D model collections to improve the analysis, modelling and editing of shapes. Data‐driven methods are also able to learn computational models that reason about properties and relationships of shapes without relying on hard‐coded rules or explicitly programmed instructions. Through reviewing the literature, we provide an overview of the main concepts and components of these methods, as well as discuss their application to classification, segmentation, matching, reconstruction, modelling and exploration, as well as scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data‐driven shape analysis and processing.
  • Item
    Multi-Modal Perception for Selective Rendering
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Harvey, Carlo; Debattista, Kurt; Bashford-Rogers, Thomas; Chalmers, Alan; Chen, Min and Zhang, Hao (Richard)
    A major challenge in generating high‐fidelity virtual environments (VEs) is to be able to provide realism at interactive rates. The high‐fidelity simulation of light and sound is still unachievable in real time as such physical accuracy is very computationally demanding. Only recently has visual perception been used in high‐fidelity rendering to improve performance by a series of novel exploitations; to render parts of the scene that are not currently being attended to by the viewer at a much lower quality without the difference being perceived. This paper investigates the effect spatialized directional sound has on the visual attention of a user towards rendered images. These perceptual artefacts are utilized in selective rendering pipelines via the use of multi‐modal maps. The multi‐modal maps are tested through psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed cost rendering functions, and are found to perform significantly better than only using image saliency maps that are naively applied to multi‐modal VEs.
  • Item
    Accurate and Efficient Computation of Laplacian Spectral Distances and Kernels
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Patané, Giuseppe; Chen, Min and Zhang, Hao (Richard)
    This paper introduces the Laplacian spectral distances: functions that resemble the usual distance map but exhibit properties (e.g. smoothness, locality, invariance to shape transformations) that make them useful for processing and analysing geometric data. Spectral distances are easily defined through a filtering of the Laplacian eigenpairs and reduce to the heat diffusion, wave, biharmonic and commute‐time distances for specific filters. In particular, the smoothness of the spectral distances and the encoding of local and global shape properties depend on the convergence of the filtered eigenvalues to zero. Instead of applying a truncated spectral approximation or prolongation operators, we propose a computation of Laplacian distances and kernels through the solution of sparse linear systems. Our approach is free of user‐defined parameters, overcomes the evaluation of the Laplacian spectrum and guarantees a higher approximation accuracy than previous work.
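A minimal sketch of the core idea of avoiding the Laplacian spectrum: a commute‐time‐style squared distance b^T L^+ b can be obtained by solving a linear system instead of computing eigenpairs. The tiny dense example below is illustrative only (the paper works with sparse systems and general spectral filters); on a graph, this quantity is the effective resistance between the two nodes.

```python
import numpy as np

# Toy graph Laplacian of a 3-node path 0 - 1 - 2 (unit edge weights).
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])

def commute_style_distance_sq(L, i, j):
    """Squared commute-time-style distance b^T L^+ b, obtained by solving a
    linear system rather than evaluating the Laplacian spectrum."""
    b = np.zeros(L.shape[0])
    b[i], b[j] = 1.0, -1.0
    # b sums to zero, so L x = b is solvable on a connected graph; lstsq
    # returns the minimum-norm solution, which equals L^+ b.
    x, *_ = np.linalg.lstsq(L, b, rcond=None)
    return float(b @ x)

print(commute_style_distance_sq(L, 0, 2))  # effective resistance across the path
```

For the two unit edges in series, the resistance between the endpoints is 2; up to the graph‐volume factor this is the commute‐time distance, one of the special cases the paper's filtered spectral distances reduce to.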
  • Item
    Visualization and Quantification for Interactive Analysis of Neural Connectivity in Drosophila
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Swoboda, N.; Moosburner, J.; Bruckner, S.; Yu, J. Y.; Dickson, B. J.; Bühler, K.; Chen, Min and Zhang, Hao (Richard)
    Neurobiologists investigate the brain of the common fruit fly Drosophila melanogaster to discover neural circuits and link them to complex behaviour. Formulating new hypotheses about connectivity requires potential connectivity information between individual neurons, indicated by overlaps of arborizations of two or more neurons. As the number of higher order overlaps (i.e. overlaps of three or more arborizations) increases exponentially with the number of neurons under investigation, visualization is impeded by clutter and quantification becomes a burden. Existing solutions are restricted to visual or quantitative analysis of pairwise overlaps, as they rely on precomputed overlap data. We present a novel tool that complements existing methods for potential connectivity exploration by providing for the first time the possibility to compute and visualize higher order arborization overlaps on the fly and to interactively explore this information in both its spatial anatomical context and on a quantitative level. Qualitative evaluation by neuroscientists and non‐experts demonstrated the utility and usability of the tool.
  • Item
    Towards Globally Optimal Normal Orientations for Large Point Clouds
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Schertler, Nico; Savchynskyy, Bogdan; Gumhold, Stefan; Chen, Min and Zhang, Hao (Richard)
    Various processing algorithms on point set surfaces rely on consistently oriented normals (e.g. Poisson surface reconstruction). While several approaches exist for the calculation of normal directions, in most cases, their orientation has to be determined in a subsequent step. This paper generalizes propagation‐based approaches by reformulating the task as a graph‐based energy minimization problem. By applying global solvers, we can achieve more consistent orientations than simple greedy optimizations. Furthermore, we present a streaming‐based framework for orienting large point clouds. This framework orients patches locally and generates a globally consistent patch orientation on a reduced neighbour graph, which achieves similar quality to orienting the full graph.
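For context, a sketch of the classic greedy propagation baseline that this paper generalizes (not the paper's global solver): build a neighbour graph, take its minimum spanning tree, and flip each normal that disagrees with its already‐oriented parent. Function and parameter names are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def orient_normals_greedy(points, normals, k=4):
    """Greedy baseline: k-NN graph -> minimum spanning tree -> BFS propagation,
    flipping a normal whenever it disagrees with its oriented parent."""
    n = len(points)
    # Dense pairwise distances for brevity (a KD-tree would be used at scale).
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:   # k nearest neighbours, excluding self
            rows.append(i); cols.append(j); vals.append(d[i, j])
    g = csr_matrix((vals, (rows, cols)), shape=(n, n))
    g = g.maximum(g.T)                         # symmetrize the neighbour graph
    mst = minimum_spanning_tree(g)
    mst = mst + mst.T                          # make the tree undirected
    order, parents = breadth_first_order(mst, i_start=0, directed=False)
    normals = normals.copy()
    for v in order[1:]:
        if np.dot(normals[v], normals[parents[v]]) < 0:
            normals[v] = -normals[v]
    return normals
```

The greedy pass commits to each flip locally, which is exactly the weakness the paper's graph‐based energy minimization addresses.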
  • Item
    Output-Sensitive Filtering of Streaming Volume Data
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Solteszova, Veronika; Birkeland, Åsmund; Stoppel, Sergej; Viola, Ivan; Bruckner, Stefan; Chen, Min and Zhang, Hao (Richard)
    Real‐time volume data acquisition poses substantial challenges for the traditional visualization pipeline where data enhancement is typically seen as a pre‐processing step. In the case of 4D ultrasound data, for instance, costly processing operations to reduce noise and to remove artefacts need to be executed for every frame. To enable the use of high‐quality filtering operations in such scenarios, we propose an output‐sensitive approach to the visualization of streaming volume data. Our method evaluates the potential contribution of all voxels to the final image, allowing us to skip expensive processing operations that have little or no effect on the visualization. As filtering operations modify the data values which may affect the visibility, our main contribution is a fast scheme to predict their maximum effect on the final image. Our approach prioritizes filtering of voxels with high contribution to the final visualization based on a maximal permissible error per pixel. With zero permissible error, the optimized filtering will yield a result that is identical to filtering of the entire volume. We provide a thorough technical evaluation of the approach and demonstrate it on several typical scenarios that require on‐the‐fly processing.
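A toy sketch of the output‐sensitive principle: filter only voxels whose predicted image‐space effect reaches the permissible error, so a zero error budget degenerates to filtering the whole volume. The contribution values and the filter are stand‐ins; the paper's actual contribution is the fast prediction scheme itself.

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random(1000)        # one streaming volume frame (flattened voxels)
contribution = rng.random(1000)  # stand-in for the predicted max image-space effect

def selective_filter(volume, contribution, max_error, expensive_filter):
    """Apply the costly filter only where the predicted effect on the image
    is at least the permissible per-pixel error; max_error == 0 filters all."""
    out = volume.copy()
    mask = contribution >= max_error
    out[mask] = expensive_filter(volume[mask])
    return out

smooth = lambda v: 0.5 * v       # stand-in for a costly noise-reduction filter
full = selective_filter(volume, contribution, 0.0, smooth)
```

Raising `max_error` trades accuracy for skipped work: low‐contribution voxels keep their unfiltered values.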
  • Item
    Consistent Partial Matching of Shape Collections via Sparse Modeling
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Cosmo, L.; Rodolà, E.; Albarelli, A.; Mémoli, F.; Cremers, D.; Chen, Min and Zhang, Hao (Richard)
    Recent efforts in the area of joint object matching approach the problem by taking as input a set of pairwise maps, which are then jointly optimized across the whole collection so that certain accuracy and consistency criteria are satisfied. One natural requirement is cycle‐consistency—namely the fact that map composition should give the same result regardless of the path taken in the shape collection. In this paper, we introduce a novel approach to obtain consistent matches without requiring initial pairwise solutions to be given as input. We do so by optimizing a joint measure of metric distortion directly over the space of cycle‐consistent maps; in order to allow for partially similar and extra‐class shapes, we formulate the problem as a series of quadratic programs with sparsity‐inducing constraints, making our technique a natural candidate for analysing collections with a large presence of outliers. The particular form of the problem allows us to leverage results and tools from the field of evolutionary game theory. This enables a highly efficient optimization procedure which assures accurate and provably consistent solutions in a matter of minutes in collections with hundreds of shapes.
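The cycle‐consistency requirement can be illustrated on toy point‐to‐point maps encoded as permutation matrices (a simplification; the paper optimizes over cycle‐consistent maps directly rather than checking given ones):

```python
import numpy as np

# Hypothetical maps between three shapes A, B, C, as permutation matrices.
P_AB = np.eye(4)[[1, 0, 3, 2]]   # map A -> B
P_BC = np.eye(4)[[2, 3, 0, 1]]   # map B -> C
P_AC = P_BC @ P_AB               # a consistent direct map: compose A -> B -> C

def cycle_consistent(P_ab, P_bc, P_ac, tol=1e-9):
    """Cycle-consistency: going A -> B -> C must equal going A -> C directly."""
    return np.allclose(P_bc @ P_ab, P_ac, atol=tol)
```

Any direct map A → C that differs from the composition violates the requirement, which is what the joint optimization rules out by construction.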
  • Item
    Constructive Visual Analytics for Text Similarity Detection
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Abdul-Rahman, A.; Roe, G.; Olsen, M.; Gladstone, C.; Whaling, R.; Cronk, N.; Morrissey, R.; Chen, M.; Chen, Min and Zhang, Hao (Richard)
    Detecting similarity between texts is a frequently encountered text mining task. Because the measurement of similarity is typically composed of a number of metrics, and some measures are sensitive to subjective interpretation, a generic detector obtained using machine learning often has difficulties balancing the roles of different metrics according to the semantic context exhibited in a specific collection of texts. In order to facilitate human interaction in a visual analytics process for text similarity detection, we first map the problem of pairwise sequence comparison to that of image processing, allowing patterns of similarity to be visualized as a 2D pixelmap. We then devise a visual interface to enable users to construct and experiment with different detectors using primitive metrics, in a way similar to constructing an image processing pipeline. We deployed this new approach for the identification of commonplaces in 18th‐century literary and print culture. Domain experts were then able to make use of the prototype system to derive new scholarly discoveries and generate new hypotheses.
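A minimal dot‐plot sketch of the pixelmap representation the abstract refers to: pairwise sequence comparison rendered as a 2D grid. Exact word match is used here as a stand‐in metric; the paper composes several primitive metrics into full detectors.

```python
def dotplot(text_a, text_b):
    """2D pixelmap of pairwise similarity: cell (i, j) is 1 where word i of
    text_a matches word j of text_b (a stand-in for richer metrics)."""
    a, b = text_a.lower().split(), text_b.lower().split()
    return [[1 if wa == wb else 0 for wb in b] for wa in a]

grid = dotplot("to be or not to be", "not to be")
for row in grid:
    print("".join("#" if v else "." for v in row))
```

Repeated passages show up as diagonal runs of matches, which is what makes the image‐processing framing natural.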
  • Item
    Partial Functional Correspondence
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Rodolà, E.; Cosmo, L.; Bronstein, M. M.; Torsello, A.; Cremers, D.; Chen, Min and Zhang, Hao (Richard)
    In this paper, we propose a method for computing partial functional correspondence between non‐rigid shapes. We use perturbation analysis to show how removal of shape parts changes the Laplace–Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. Corresponding parts are optimization variables in our problem and are used to weight the functional correspondence; we are looking for the largest and most regular (in the Mumford–Shah sense) parts that minimize correspondence distortion. We show that our approach can cope with very challenging correspondence settings.
  • Item
    Synthesis of Human Skin Pigmentation Disorders
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Barros, R. S.; Walter, M.; Chen, Min and Zhang, Hao (Richard)
    Changes in the human pigmentary system can lead to imbalances in the distribution of melanin in the skin resulting in artefacts known as pigmented lesions. Our work takes as its point of departure biological data regarding human skin, the pigmentary system and the melanocytes life cycle and presents a reaction–diffusion model for the simulation of the shape features of human‐pigmented lesions. The simulation of such disorders has many applications in dermatology, for instance, to assist dermatologists in diagnosis and training related to pigmentation disorders. Our study focuses, however, on applications related to computer graphics. Thus, we also present a method to seamlessly blend the results of our simulation model into images of healthy human skin. In this context, our model contributes to the generation of more realistic skin textures and therefore more realistic human models. In order to assess the quality of our results, we measured and compared the characteristics of the shape of real and synthesized pigmented lesions. We show that synthesized and real lesions have no statistically significant differences in their shape features. Visually, our results also compare favourably with images of real lesions, being virtually indistinguishable from real images.
  • Item
    Graphs in Scientific Visualization: A Survey
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Wang, Chaoli; Tao, Jun; Chen, Min and Zhang, Hao (Richard)
    Graphs represent general node‐link diagrams and have long been utilized in scientific visualization for data organization and management. However, using graphs as a visual representation and interface for navigating and exploring scientific data sets has a much shorter history, yet the amount of work along this direction is clearly on the rise in recent years. In this paper, we take a holistic perspective and survey graph‐based representations and techniques for scientific visualization. Specifically, we classify these representations and techniques into four categories, namely partition‐wise, relationship‐wise, structure‐wise and provenance‐wise. We survey related publications in each category, explaining the roles of graphs in related work and highlighting their similarities and differences. At the end, we reexamine these related publications following the graph‐based visualization pipeline. We also point out research trends and remaining challenges in graph‐based representations and techniques for scientific visualization.
  • Item
    Constrained Convex Space Partition for Ray Tracing in Architectural Environments
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Maria, M.; Horna, S.; Aveneau, L.; Chen, Min and Zhang, Hao (Richard)
    This paper explores constrained convex space partition (CCSP) as a new acceleration structure for ray tracing. A CCSP is a graph representing a space partition made up of empty convex volumes. The scene geometry is located on the boundary of the convex volumes. Therefore, each empty volume is bounded with two kinds of faces: occlusive ones (belonging to the scene geometry), and non‐occlusive ones. Given a ray, ray casting is performed by traversing the CCSP one volume at a time, until it hits the scene geometry. In this paper, this idea is applied to architectural scenes. We show that a CCSP allows casting several hundred million rays per second, even when they are not spatially coherent. Experiments are performed for large furnished buildings made up of hundreds of millions of polygons and containing thousands of light sources.
  • Item
    A Survey of Surface Reconstruction from Point Clouds
    (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Berger, Matthew; Tagliasacchi, Andrea; Seversky, Lee M.; Alliez, Pierre; Guennebaud, Gaël; Levine, Joshua A.; Sharf, Andrei; Silva, Claudio T.; Chen, Min and Zhang, Hao (Richard)
    The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece‐wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations—not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques and provide directions for future work in surface reconstruction.