37-Issue 1

Issue Information

Issue Information

Articles

CorrelatedMultiples: Spatially Coherent Small Multiples With Constrained Multi‐Dimensional Scaling

Liu, Xiaotong
Hu, Yifan
North, Stephen
Shen, Han‐Wei
Report

2018 Cover Image: Thingi10K

Zhou, Qingnan
Jacobson, Alec
Editorial

Editorial

Chen, Min
Benes, Bedrich
Articles

Story Albums: Creating Fictional Stories From Personal Photograph Sets

Radiano, O.
Graber, Y.
Mahler, M.
Sigal, L.
Shamir, A.
Articles

CPU–GPU Parallel Framework for Real‐Time Interactive Cutting of Adaptive Octree‐Based Deformable Objects

Jia, Shiyu
Zhang, Weizhong
Yu, Xiaokang
Pan, Zhenkuan
Articles

ARAPLBS: Robust and Efficient Elasticity‐Based Optimization of Weights and Skeleton Joints for Linear Blend Skinning with Parametrized Bones

Thiery, J.‐M.
Eisemann, E.
Articles

The State of the Art in Sentiment Visualization

Kucher, Kostiantyn
Paradis, Carita
Kerren, Andreas
Articles

Super‐Resolution of Point Set Surfaces Using Local Similarities

Hamdi‐Cherif, Azzouz
Digne, Julie
Chaine, Raphaëlle
Articles

Easy Generation of Facial Animation Using Motion Graphs

Serra, J.
Cetinaslan, O.
Ravikumar, S.
Orvalho, V.
Cosker, D.
Articles

Data Abstraction for Visualizing Large Time Series

Shurkhovetskyy, G.
Andrienko, N.
Andrienko, G.
Fuchs, G.
Articles

Peridynamics‐Based Fracture Animation for Elastoplastic Solids

Chen, Wei
Zhu, Fei
Zhao, Jing
Li, Sheng
Wang, Guoping
Articles

Enhanced Visualization of Detected 3D Geometric Differences

Palma, Gianpaolo
Sabbadin, Manuele
Corsini, Massimiliano
Cignoni, Paolo
Articles

On the Stability of Functional Maps and Shape Difference Operators

Huang, R.
Chazal, F.
Ovsjanikov, M.
Articles

Audiovisual Resource Allocation for Bimodal Virtual Environments

Doukakis, E.
Debattista, K.
Harvey, C.
Bashford‐Rogers, T.
Chalmers, A.
Articles

CLUST: Simulating Realistic Crowd Behaviour by Mining Pattern from Crowd Videos

Zhao, M.
Cai, W.
Turner, S. J.
Articles

Realistic Ultrasound Simulation of Complex Surface Models Using Interactive Monte‐Carlo Path Tracing

Mattausch, Oliver
Makhinya, Maxim
Goksel, Orcun
Articles

Tree Growth Modelling Constrained by Growth Equations

Yi, Lei
Li, Hongjun
Guo, Jianwei
Deussen, Oliver
Zhang, Xiaopeng
Articles

Enhancing the Realism of Sketch and Painted Portraits With Adaptable Patches

Lee, Yin‐Hsuan
Chang, Yu‐Kai
Chang, Yu‐Lun
Lin, I‐Chen
Wang, Yu‐Shuen
Lin, Wen‐Chieh
Articles

Guidelines for Quantitative Evaluation of Medical Visualizations on the Example of 3D Aneurysm Surface Comparisons

Saalfeld, P.
Luz, M.
Berg, P.
Preim, B.
Saalfeld, S.
Articles

A Visualization Framework and User Studies for Overloaded Orthogonal Drawings

Didimo, Walter
Kornaropoulos, Evgenios M.
Montecchiani, Fabrizio
Tollis, Ioannis G.
Articles

Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization

Dasgupta, Aritra
Arendt, Dustin L.
Franklin, Lyndsey R.
Wong, Pak Chung
Cook, Kristin A.
Articles

Stereo‐Consistent Contours in Object Space

Bukenberger, Dennis R.
Schwarz, Katharina
Lensch, Hendrik P. A.
Articles

Improved Corners with Multi‐Channel Signed Distance Fields

Chlumský, V.
Sloup, J.
Šimeček, I.
Articles

Uniformization and Density Adaptation for Point Cloud Data Via Graph Laplacian

Luo, Chuanjiang
Ge, Xiaoyin
Wang, Yusu
Articles

An Efficient Hybrid Incompressible SPH Solver with Interface Handling for Boundary Conditions

Takahashi, Tetsuya
Dobashi, Yoshinori
Nishita, Tomoyuki
Lin, Ming C.
Articles

Large‐Scale Pixel‐Precise Deferred Vector Maps

Thöny, Matthias
Billeter, Markus
Pajarola, Renato
Articles

Olfaction and Selective Rendering

Harvey, Carlo
Bashford‐Rogers, Thomas
Debattista, Kurt
Doukakis, Efstratios
Chalmers, Alan
Articles

Frame Rate vs Resolution: A Subjective Evaluation of Spatiotemporal Perceived Quality Under Varying Computational Budgets

Debattista, K.
Bugeja, K.
Spina, S.
Bashford‐Rogers, T.
Hulusic, V.
Articles

ProactiveCrowd: Modelling Proactive Steering Behaviours for Agent‐Based Crowd Simulation

Luo, Linbo
Chai, Cheng
Ma, Jianfeng
Zhou, Suiping
Cai, Wentong
Articles

Interactive Large‐Scale Procedural Forest Construction and Visualization Based on Particle Flow Simulation

Kohek, Štefan
Strnad, Damjan
Articles

A Survey on Multimodal Medical Data Visualization

Lawonn, K.
Smit, N.N.
Bühler, K.
Preim, B.
Articles

Distinctive Approaches to Computer Graphics Education

Santos, B. Sousa
Dischler, J.‐M.
Svobodova, L.
Wimmer, M.
Zara, J.
Adzhiev, V.
Anderson, E.F.
Ferko, A.
Fryazinov, O.
Ilčík, M.
Ilčíková, I.
Slavik, P.
Sundstedt, V.
Articles

Application‐Specific Tone Mapping Via Genetic Programming

Debattista, K.


BibTeX (37-Issue 1)
                
@article{10.1111:cgf.13299,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13299}
}

@article{10.1111:cgf.12526,
  journal = {Computer Graphics Forum},
  title = {{CorrelatedMultiples: Spatially Coherent Small Multiples With Constrained Multi‐Dimensional Scaling}},
  author = {Liu, Xiaotong and Hu, Yifan and North, Stephen and Shen, Han‐Wei},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12526}
}

@article{10.1111:cgf.13328,
  journal = {Computer Graphics Forum},
  title = {{2018 Cover Image: Thingi10K}},
  author = {Zhou, Qingnan and Jacobson, Alec},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13328}
}

@article{10.1111:cgf.13330,
  journal = {Computer Graphics Forum},
  title = {{Editorial}},
  author = {Chen, Min and Benes, Bedrich},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13330}
}

@article{10.1111:cgf.13099,
  journal = {Computer Graphics Forum},
  title = {{Story Albums: Creating Fictional Stories From Personal Photograph Sets}},
  author = {Radiano, O. and Graber, Y. and Mahler, M. and Sigal, L. and Shamir, A.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13099}
}

@article{10.1111:cgf.13162,
  journal = {Computer Graphics Forum},
  title = {{CPU–GPU Parallel Framework for Real‐Time Interactive Cutting of Adaptive Octree‐Based Deformable Objects}},
  author = {Jia, Shiyu and Zhang, Weizhong and Yu, Xiaokang and Pan, Zhenkuan},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13162}
}

@article{10.1111:cgf.13161,
  journal = {Computer Graphics Forum},
  title = {{ARAPLBS: Robust and Efficient Elasticity‐Based Optimization of Weights and Skeleton Joints for Linear Blend Skinning with Parametrized Bones}},
  author = {Thiery, J.‐M. and Eisemann, E.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13161}
}

@article{10.1111:cgf.13217,
  journal = {Computer Graphics Forum},
  title = {{The State of the Art in Sentiment Visualization}},
  author = {Kucher, Kostiantyn and Paradis, Carita and Kerren, Andreas},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13217}
}

@article{10.1111:cgf.13216,
  journal = {Computer Graphics Forum},
  title = {{Super‐Resolution of Point Set Surfaces Using Local Similarities}},
  author = {Hamdi‐Cherif, Azzouz and Digne, Julie and Chaine, Raphaëlle},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13216}
}

@article{10.1111:cgf.13218,
  journal = {Computer Graphics Forum},
  title = {{Easy Generation of Facial Animation Using Motion Graphs}},
  author = {Serra, J. and Cetinaslan, O. and Ravikumar, S. and Orvalho, V. and Cosker, D.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13218}
}

@article{10.1111:cgf.13237,
  journal = {Computer Graphics Forum},
  title = {{Data Abstraction for Visualizing Large Time Series}},
  author = {Shurkhovetskyy, G. and Andrienko, N. and Andrienko, G. and Fuchs, G.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13237}
}

@article{10.1111:cgf.13236,
  journal = {Computer Graphics Forum},
  title = {{Peridynamics‐Based Fracture Animation for Elastoplastic Solids}},
  author = {Chen, Wei and Zhu, Fei and Zhao, Jing and Li, Sheng and Wang, Guoping},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13236}
}

@article{10.1111:cgf.13239,
  journal = {Computer Graphics Forum},
  title = {{Enhanced Visualization of Detected 3D Geometric Differences}},
  author = {Palma, Gianpaolo and Sabbadin, Manuele and Corsini, Massimiliano and Cignoni, Paolo},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13239}
}

@article{10.1111:cgf.13238,
  journal = {Computer Graphics Forum},
  title = {{On the Stability of Functional Maps and Shape Difference Operators}},
  author = {Huang, R. and Chazal, F. and Ovsjanikov, M.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13238}
}

@article{10.1111:cgf.13258,
  journal = {Computer Graphics Forum},
  title = {{Audiovisual Resource Allocation for Bimodal Virtual Environments}},
  author = {Doukakis, E. and Debattista, K. and Harvey, C. and Bashford‐Rogers, T. and Chalmers, A.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13258}
}

@article{10.1111:cgf.13259,
  journal = {Computer Graphics Forum},
  title = {{CLUST: Simulating Realistic Crowd Behaviour by Mining Pattern from Crowd Videos}},
  author = {Zhao, M. and Cai, W. and Turner, S. J.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13259}
}

@article{10.1111:cgf.13260,
  journal = {Computer Graphics Forum},
  title = {{Realistic Ultrasound Simulation of Complex Surface Models Using Interactive Monte‐Carlo Path Tracing}},
  author = {Mattausch, Oliver and Makhinya, Maxim and Goksel, Orcun},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13260}
}

@article{10.1111:cgf.13263,
  journal = {Computer Graphics Forum},
  title = {{Tree Growth Modelling Constrained by Growth Equations}},
  author = {Yi, Lei and Li, Hongjun and Guo, Jianwei and Deussen, Oliver and Zhang, Xiaopeng},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13263}
}

@article{10.1111:cgf.13261,
  journal = {Computer Graphics Forum},
  title = {{Enhancing the Realism of Sketch and Painted Portraits With Adaptable Patches}},
  author = {Lee, Yin‐Hsuan and Chang, Yu‐Kai and Chang, Yu‐Lun and Lin, I‐Chen and Wang, Yu‐Shuen and Lin, Wen‐Chieh},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13261}
}

@article{10.1111:cgf.13262,
  journal = {Computer Graphics Forum},
  title = {{Guidelines for Quantitative Evaluation of Medical Visualizations on the Example of 3D Aneurysm Surface Comparisons}},
  author = {Saalfeld, P. and Luz, M. and Berg, P. and Preim, B. and Saalfeld, S.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13262}
}

@article{10.1111:cgf.13266,
  journal = {Computer Graphics Forum},
  title = {{A Visualization Framework and User Studies for Overloaded Orthogonal Drawings}},
  author = {Didimo, Walter and Kornaropoulos, Evgenios M. and Montecchiani, Fabrizio and Tollis, Ioannis G.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13266}
}

@article{10.1111:cgf.13264,
  journal = {Computer Graphics Forum},
  title = {{Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization}},
  author = {Dasgupta, Aritra and Arendt, Dustin L. and Franklin, Lyndsey R. and Wong, Pak Chung and Cook, Kristin A.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13264}
}

@article{10.1111:cgf.13291,
  journal = {Computer Graphics Forum},
  title = {{Stereo‐Consistent Contours in Object Space}},
  author = {Bukenberger, Dennis R. and Schwarz, Katharina and Lensch, Hendrik P. A.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13291}
}

@article{10.1111:cgf.13265,
  journal = {Computer Graphics Forum},
  title = {{Improved Corners with Multi‐Channel Signed Distance Fields}},
  author = {Chlumský, V. and Sloup, J. and Šimeček, I.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13265}
}

@article{10.1111:cgf.13293,
  journal = {Computer Graphics Forum},
  title = {{Uniformization and Density Adaptation for Point Cloud Data Via Graph Laplacian}},
  author = {Luo, Chuanjiang and Ge, Xiaoyin and Wang, Yusu},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13293}
}

@article{10.1111:cgf.13292,
  journal = {Computer Graphics Forum},
  title = {{An Efficient Hybrid Incompressible SPH Solver with Interface Handling for Boundary Conditions}},
  author = {Takahashi, Tetsuya and Dobashi, Yoshinori and Nishita, Tomoyuki and Lin, Ming C.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13292}
}

@article{10.1111:cgf.13294,
  journal = {Computer Graphics Forum},
  title = {{Large‐Scale Pixel‐Precise Deferred Vector Maps}},
  author = {Thöny, Matthias and Billeter, Markus and Pajarola, Renato},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13294}
}

@article{10.1111:cgf.13295,
  journal = {Computer Graphics Forum},
  title = {{Olfaction and Selective Rendering}},
  author = {Harvey, Carlo and Bashford‐Rogers, Thomas and Debattista, Kurt and Doukakis, Efstratios and Chalmers, Alan},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13295}
}

@article{10.1111:cgf.13302,
  journal = {Computer Graphics Forum},
  title = {{Frame Rate vs Resolution: A Subjective Evaluation of Spatiotemporal Perceived Quality Under Varying Computational Budgets}},
  author = {Debattista, K. and Bugeja, K. and Spina, S. and Bashford‐Rogers, T. and Hulusic, V.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13302}
}

@article{10.1111:cgf.13303,
  journal = {Computer Graphics Forum},
  title = {{ProactiveCrowd: Modelling Proactive Steering Behaviours for Agent‐Based Crowd Simulation}},
  author = {Luo, Linbo and Chai, Cheng and Ma, Jianfeng and Zhou, Suiping and Cai, Wentong},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13303}
}

@article{10.1111:cgf.13304,
  journal = {Computer Graphics Forum},
  title = {{Interactive Large‐Scale Procedural Forest Construction and Visualization Based on Particle Flow Simulation}},
  author = {Kohek, Štefan and Strnad, Damjan},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13304}
}

@article{10.1111:cgf.13306,
  journal = {Computer Graphics Forum},
  title = {{A Survey on Multimodal Medical Data Visualization}},
  author = {Lawonn, K. and Smit, N.N. and Bühler, K. and Preim, B.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13306}
}

@article{10.1111:cgf.13305,
  journal = {Computer Graphics Forum},
  title = {{Distinctive Approaches to Computer Graphics Education}},
  author = {Santos, B. Sousa and Dischler, J.‐M. and Svobodova, L. and Wimmer, M. and Zara, J. and Adzhiev, V. and Anderson, E.F. and Ferko, A. and Fryazinov, O. and Ilčík, M. and Ilčíková, I. and Slavik, P. and Sundstedt, V.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13305}
}

@article{10.1111:cgf.13307,
  journal = {Computer Graphics Forum},
  title = {{Application‐Specific Tone Mapping Via Genetic Programming}},
  author = {Debattista, K.},
  year = {2018},
  publisher = {© 2018 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13307}
}

Recent Submissions

Now showing 1 - 34 of 34
  • Item
    Issue Information
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min and Benes, Bedrich
  • Item
    CorrelatedMultiples: Spatially Coherent Small Multiples With Constrained Multi‐Dimensional Scaling
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Liu, Xiaotong; Hu, Yifan; North, Stephen; Shen, Han‐Wei; Chen, Min and Benes, Bedrich
    Displaying small multiples is a popular method for visually summarizing and comparing multiple facets of a complex data set. If the correlations between the data are not considered when displaying the multiples, searching and comparing specific items become more difficult since a sequential scan of the display is often required. To address this issue, we introduce CorrelatedMultiples, a spatially coherent visualization based on small multiples, where the items are placed so that the distances reflect their dissimilarities. We propose a constrained multi‐dimensional scaling (CMDS) solver that preserves spatial proximity while forcing the items to remain within a fixed region. We evaluate the effectiveness of our approach by comparing CMDS with other competing methods through a controlled user study and a quantitative study, and demonstrate the usefulness of CorrelatedMultiples for visual search and comparison in three real‐world case studies.
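The core idea in the abstract above is an MDS layout whose solutions are forced to stay inside a fixed display region. As a rough illustration only (this is not the paper's CMDS solver; the gradient-descent-with-clipping scheme and all names here are my own toy sketch), one can alternate a stress-reduction step with a projection back into the allowed box:

```python
import numpy as np

def constrained_mds(D, bounds, iters=500, lr=0.01, seed=0):
    """Toy 2-D MDS under a box constraint: gradient descent on the
    stress function, clamping positions into `bounds`
    (xmin, xmax, ymin, ymax) after every step."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    xmin, xmax, ymin, ymax = bounds
    X = np.column_stack([rng.uniform(xmin, xmax, n),
                         rng.uniform(ymin, ymax, n)])
    for _ in range(iters):
        diff = X[:, None, :] - X[None, :, :]       # pairwise difference vectors
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)                # avoid divide-by-zero
        coef = (dist - D) / dist                   # d(stress)/d(dist), scaled
        np.fill_diagonal(coef, 0.0)
        grad = 2.0 * (coef[:, :, None] * diff).sum(axis=1)
        X -= lr * grad
        # projection step: every item must remain inside the fixed region
        X[:, 0] = np.clip(X[:, 0], xmin, xmax)
        X[:, 1] = np.clip(X[:, 1], ymin, ymax)
    return X

def stress(X, D):
    """Sum of squared differences between layout and target distances."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return ((dist - D) ** 2).sum() / 2.0
```

The projection step is what distinguishes this from plain MDS: it trades some distance fidelity for the guarantee that all small multiples fit the display.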
  • Item
    2018 Cover Image: Thingi10K
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhou, Qingnan; Jacobson, Alec; Chen, Min and Benes, Bedrich
  • Item
    Editorial
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min; Benes, Bedrich; Chen, Min and Benes, Bedrich
  • Item
    Story Albums: Creating Fictional Stories From Personal Photograph Sets
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Radiano, O.; Graber, Y.; Mahler, M.; Sigal, L.; Shamir, A.; Chen, Min and Benes, Bedrich
    We present a method for the automatic creation of fictional storybooks based on personal photographs. Unlike previous attempts that summarize such collections by picking salient or diverse photos, or creating personal literal narratives, we focus on the creation of fictional stories. This provides new value to users, as well as an engaging way for people (especially children) to experience their own photographs. We use a graph model to represent an artist‐generated story, where each node is a ‘frame’, akin to frames in comics or storyboards. A node is described by story elements, comprising actors, location, supporting objects and time. The edges in the graph encode connections between these elements and provide the discourse of the story. Based on this construction, we develop a constraint satisfaction algorithm for one‐to‐one assignment of nodes to photographs. Once each node is assigned to a photograph, a visual depiction of the story can be generated in different styles using various templates. We show results of several fictional visual stories created from different personal photo sets and in different styles.
  • Item
    CPU–GPU Parallel Framework for Real‐Time Interactive Cutting of Adaptive Octree‐Based Deformable Objects
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan; Chen, Min and Benes, Bedrich
    A software framework taking advantage of parallel processing capabilities of CPUs and GPUs is designed for the real‐time interactive cutting simulation of deformable objects. Deformable objects are modelled as voxels connected by links. The voxels are embedded in an octree mesh used for deformation. Cutting is performed by disconnecting links swept by the cutting tool and then adaptively refining octree elements near the cutting tool trajectory. A surface mesh used for visual display is reconstructed from disconnected links using the dual contour method. Spatial hashing of the octree mesh and topology‐aware interpolation of distance field are used for collision. Our framework uses a novel GPU implementation for inter‐object collision and object self collision, while tool‐object collision, cutting and deformation are assigned to CPU, using multiple threads whenever possible. A novel method that splits cutting operations into four independent tasks running in parallel is designed. Our framework also performs data transfers between CPU and GPU simultaneously with other tasks to reduce their impact on performances. Simulation tests show that when compared to three‐threaded CPU implementations, our GPU accelerated collision is 53–160% faster; and the overall simulation frame rate is 47–98% faster.
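The abstract describes splitting cutting into four independent tasks that run in parallel. Purely as a sketch of that pattern (the task names and return values below are illustrative stand-ins, not taken from the paper; the real sub-tasks operate on shared octree and link data), independent sub-tasks can be dispatched to a thread pool and joined:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(tasks):
    """Run independent sub-tasks concurrently and collect their results.
    `tasks` maps a task name to a zero-argument callable."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {name: pool.submit(fn) for name, fn in tasks.items()}
        return {name: f.result() for name, f in futures.items()}

# Hypothetical stand-ins for four cutting sub-tasks.
def disconnect_links():  return "links disconnected"
def refine_octree():     return "octree refined"
def rebuild_surface():   return "surface rebuilt"
def update_collision():  return "collision updated"
```

Dispatching all four and waiting on `result()` mirrors the fork-join structure such a cutting step would need before the next simulation frame can proceed.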
  • Item
    ARAPLBS: Robust and Efficient Elasticity‐Based Optimization of Weights and Skeleton Joints for Linear Blend Skinning with Parametrized Bones
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Thiery, J.‐M.; Eisemann, E.; Chen, Min and Benes, Bedrich
    We present a fast, robust and high‐quality technique to skin a mesh with reference to a skeleton. We consider the space of possible skeleton deformations (based on skeletal constraints, or skeletal animations), and compute skinning weights based on an optimization scheme to obtain as‐rigid‐as‐possible (ARAP) corresponding mesh deformations. We support stretchable‐and‐twistable bones (STBs) and spines by generalizing the ARAP deformations to stretchable deformers. In addition, our approach can optimize joint placements. If wanted, a user can guide and interact with the results, which is facilitated by an interactive feedback, reached via an efficient sparsification scheme. We demonstrate our technique on challenging inputs (STBs and spines, triangle and tetrahedral meshes featuring missing elements, boundaries, self‐intersections or wire edges).
  • Item
    The State of the Art in Sentiment Visualization
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kucher, Kostiantyn; Paradis, Carita; Kerren, Andreas; Chen, Min and Benes, Bedrich
    Visualization of sentiments and opinions extracted from or annotated in texts has become a prominent topic of research over the last decade. From basic pie and bar charts used to illustrate customer reviews to extensive visual analytics systems involving novel representations, sentiment visualization techniques have evolved to deal with complex multidimensional data sets, including temporal, relational and geospatial aspects. This contribution presents a survey of sentiment visualization techniques based on a detailed categorization. We describe the background of sentiment analysis, introduce a categorization for sentiment visualization techniques that includes 7 groups with 35 categories in total, and discuss 132 techniques from peer‐reviewed publications together with an interactive web‐based survey browser. Finally, we discuss insights and opportunities for further research in sentiment visualization. We expect this survey to be useful for visualization researchers whose interests include sentiment or other aspects of text data as well as researchers and practitioners from other disciplines in search of efficient visualization techniques applicable to their tasks and data.
  • Item
    Super‐Resolution of Point Set Surfaces Using Local Similarities
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Hamdi‐Cherif, Azzouz; Digne, Julie; Chaine, Raphaëlle; Chen, Min and Benes, Bedrich
    Three‐dimensional scanners provide a virtual representation of object surfaces at some given precision that depends on many factors such as the object material, the quality of the laser ray or the resolution of the camera. This precision may even vary over the surface, depending, for example, on the distance to the scanner which results in uneven and unstructured point sets, with an uncertainty on the coordinates. To enhance the quality of the scanner output, one usually resorts to local surface interpolation between measured points. However, object surfaces often exhibit interesting statistical features such as repetitive geometric textures. Building on this property, we propose a new approach for surface super‐resolution that detects repetitive patterns or self‐similarities and exploits them to improve the scan resolution by aggregating scattered measures. In contrast with other surface super‐resolution methods, our algorithm has two important advantages. First, when handling multiple scans, it does not rely on surface registration. Second, it is able to produce super‐resolution from even a single scan. These features are made possible by a new local shape description able to capture differential properties of order above 2. By comparing those descriptors, similarities are detected and used to generate a high‐resolution surface. Our results show a clear resolution gain over state‐of‐the‐art interpolation methods.
  • Item
    Easy Generation of Facial Animation Using Motion Graphs
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Serra, J.; Cetinaslan, O.; Ravikumar, S.; Orvalho, V.; Cosker, D.; Chen, Min and Benes, Bedrich
    Facial animation is a time‐consuming and cumbersome task that requires years of experience and/or a complex and expensive set‐up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video‐games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean‐based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video‐game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
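The synthesis step the abstract describes (shortest-path traversal of the motion graph, then swapping some path nodes with neighbours for coherent noise) can be sketched as below. This is a hypothetical illustration, not the authors' implementation; the graph layout, costs and the helper names `dijkstra_path` and `add_coherent_noise` are invented for the sketch.

```python
import heapq
import random

def dijkstra_path(graph, start, goal):
    """Shortest path by Dijkstra's algorithm; graph maps node -> {neighbour: cost}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def add_coherent_noise(graph, path, swap_prob=0.5, rng=None):
    """Swap interior path nodes with a random neighbour to vary the motion."""
    rng = rng or random.Random(0)
    noisy = list(path)
    for i in range(1, len(noisy) - 1):
        if rng.random() < swap_prob:
            # Only swap to neighbours that keep the path's endpoints intact.
            candidates = [n for n in graph[noisy[i]]
                          if n not in (noisy[i - 1], noisy[i + 1])]
            if candidates:
                noisy[i] = rng.choice(candidates)
    return noisy
```

In the paper's setting the graph nodes would be merged facial poses per region; here they are plain labels.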
  • Item
    Data Abstraction for Visualizing Large Time Series
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Shurkhovetskyy, G.; Andrienko, N.; Andrienko, G.; Fuchs, G.; Chen, Min and Benes, Bedrich
    Numeric time series is a class of data consisting of chronologically ordered observations represented by numeric values. Much of the data in various domains, such as financial, medical and scientific, are represented in the form of time series. To cope with the increasing sizes of datasets, numerous approaches for abstracting large temporal data are developed in the area of data mining. Many of them proved to be useful for time series visualization. However, despite the existence of numerous surveys on time series mining and visualization, there is no comprehensive classification of the existing methods based on the needs of visualization designers. We propose a classification framework that defines essential criteria for selecting an abstraction method with an eye to subsequent visualization and support of users' analysis tasks. We show that approaches developed in the data mining field are capable of creating representations that are useful for visualizing time series data. We evaluate these methods in terms of the defined criteria and provide a summary table that can be easily used for selecting suitable abstraction methods depending on data properties, desirable form of representation, behaviour features to be studied, required accuracy and level of detail, and the necessity of efficient search and querying. We also indicate directions for possible extension of the proposed classification framework.
  • Item
    Peridynamics‐Based Fracture Animation for Elastoplastic Solids
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Wei; Zhu, Fei; Zhao, Jing; Li, Sheng; Wang, Guoping; Chen, Min and Benes, Bedrich
    In this paper, we exploit the use of peridynamics theory for graphical animation of material deformation and fracture. We present a new meshless framework for elastoplastic constitutive modelling that contrasts with previous approaches in graphics. Our peridynamics‐based elastoplasticity model represents deformation behaviours of materials with high realism. We validate the model by varying the material properties and performing comparisons with finite element method (FEM) simulations. The integral‐based nature of peridynamics makes it trivial to model material discontinuities, which outweighs differential‐based methods in both accuracy and ease of implementation. We propose a simple strategy to model fracture in the setting of peridynamics discretization. We demonstrate that the fracture criterion combined with our elastoplasticity model could realistically produce ductile fracture as well as brittle fracture. Our work is the first application of peridynamics in graphics that could create a wide range of material phenomena including elasticity, plasticity, and fracture. The complete framework provides an attractive alternative to existing methods for producing modern visual effects.
  • Item
    Enhanced Visualization of Detected 3D Geometric Differences
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Palma, Gianpaolo; Sabbadin, Manuele; Corsini, Massimiliano; Cignoni, Paolo; Chen, Min and Benes, Bedrich
    The wide availability of 3D acquisition devices makes viable their use for shape monitoring. The current techniques for the analysis of time‐varying data can efficiently detect actual significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show at the same time the original appearance of the 3D model. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences that have been detected as significant. Additionally, the same technique is able to visually hide the other negligible, yet visible, variations. The main idea is to use two distinct screen space time‐based interpolation functions for the significant 3D differences and for the small variations to hide. We have validated the proposed approach in a user study on a different class of datasets, proving the objective and subjective effectiveness of the method.
  • Item
    On the Stability of Functional Maps and Shape Difference Operators
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Huang, R.; Chazal, F.; Ovsjanikov, M.; Chen, Min and Benes, Bedrich
    In this paper, we provide stability guarantees for two frameworks that are based on the notion of functional maps—the framework of shape difference operators introduced in [ROA*13] and the framework of [OBCCG13] for analysing and visualizing the deformations between shapes induced by a functional map. We consider two types of perturbations in our analysis: one is on the input shapes and the other is on the change in the functional map itself. In theory, we formulate and justify the robustness that has been observed in practical implementations of those frameworks. Inspired by our theoretical results, we propose a pipeline for constructing shape difference operators on point clouds and show numerically that the results are robust and informative. In particular, we show that both the shape difference operators and the derived areas of highest distortion are stable with respect to changes in shape representation and change of scale. Remarkably, this is in contrast with the well‐known instability of the eigenfunctions of the Laplace–Beltrami operator computed on point clouds compared to those obtained on triangle meshes.
  • Item
    Audiovisual Resource Allocation for Bimodal Virtual Environments
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Doukakis, E.; Debattista, K.; Harvey, C.; Bashford‐Rogers, T.; Chalmers, A.; Chen, Min and Benes, Bedrich
    Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to distribute these carefully in order to simulate the most ideal perceptual experience. This paper investigates this balance of resources across multiple scenarios where combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken where participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli. In the experiment, increasing the quality of one of the stimuli decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget is increased, an approximately balanced distribution of resources is preferred between graphics and acoustics. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.
  • Item
    CLUST: Simulating Realistic Crowd Behaviour by Mining Pattern from Crowd Videos
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zhao, M.; Cai, W.; Turner, S. J.; Chen, Min and Benes, Bedrich
    In this paper, we present a data‐driven approach to simulate realistic locomotion of virtual pedestrians. We focus on simulating low‐level pedestrians' motion, where a pedestrian's motion is mainly affected by other pedestrians and static obstacles nearby, and the preferred velocities of agents (direction and speed) are obtained from higher level path planning models. Before the simulation, collision avoidance processes (i.e. examples) are extracted from videos to describe how pedestrians avoid collisions, which are then clustered using a hierarchical clustering algorithm with a novel distance function to find similar patterns of pedestrians' collision avoidance behaviours. During the simulation, at each time step, the perceived state of each agent is classified into one cluster using a neural network trained before the simulation. A sequence of velocity vectors, representing the agent's future motion, is selected among the examples corresponding to the chosen cluster. The proposed CLUST model is trained and applied to different real‐world datasets to evaluate its generality and effectiveness both qualitatively and quantitatively. The simulation results demonstrate that the proposed model can generate realistic crowd behaviours with comparable computational cost.
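The per-timestep loop the abstract describes (classify the agent's perceived state into a cluster, then pick an example velocity sequence from that cluster) can be sketched as below. This is a hypothetical sketch, not the CLUST implementation: the paper uses a trained neural network and a learned distance function, whereas here a nearest-centroid rule and Euclidean distance stand in, and the names `classify_state` and `select_velocity_sequence` are invented.

```python
import math

def classify_state(state, centroids):
    """Assign a perceived state to the nearest cluster centroid
    (a stand-in for the trained neural-network classifier)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(state, centroids[i]))

def select_velocity_sequence(cluster_id, examples, state):
    """From the chosen cluster, pick the example whose recorded state is
    closest to the agent's current state; return its velocity sequence."""
    best = min(examples[cluster_id],
               key=lambda ex: math.dist(state, ex["state"]))
    return best["velocities"]
```

An agent would apply the first velocity of the returned sequence and re-classify on the next time step.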
  • Item
    Realistic Ultrasound Simulation of Complex Surface Models Using Interactive Monte‐Carlo Path Tracing
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Mattausch, Oliver; Makhinya, Maxim; Goksel, Orcun; Chen, Min and Benes, Bedrich
    Ray‐based simulations have been shown to generate impressively realistic ultrasound images at interactive frame rates. Recent efforts used GPU‐based surface raytracing to simulate complex ultrasound interactions such as multiple reflections and refractions. These methods are restricted to perfectly specular reflections (i.e. following only a single reflective/refractive ray), whereas real tissue exhibits roughness of varying degree at tissue interfaces, causing partly diffuse reflections and refractions. Such surface interactions are significantly more complex and can in general not be handled by conventional deterministic raytracing approaches. However, these can be efficiently computed by Monte‐Carlo sampling techniques, where many ray paths are generated with respect to a probability distribution. In this paper, we introduce Monte‐Carlo raytracing for ultrasound simulation. This enables the realistic simulation of ultrasound‐tissue interactions such as soft shadows and fuzzy reflections. We discuss how to properly weight the contribution of each ray path in order to simulate the behaviour of a beamformed ultrasound signal. Tracing many individual rays per transducer element is easily parallelizable on modern GPUs, as opposed to previous approaches based on recursive binary raytracing. We further propose a significant performance optimization based on adaptive sampling.
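The core Monte-Carlo idea here, following one stochastically chosen branch per ray instead of recursively splitting into reflective and refractive rays, can be illustrated in one dimension. This is a deliberately minimal sketch under invented names (`estimate_reflection`, `p_reflect`), not the paper's beamforming-weighted simulator: averaging many single-branch samples converges to the same expected signal p·R + (1−p)·T that a deterministic split would compute.

```python
import random

def estimate_reflection(n_samples, p_reflect, reflect_val, refract_val, rng=None):
    """Monte-Carlo estimate of a mixed reflective/refractive response.
    Each ray follows exactly one branch, chosen with probability p_reflect;
    the sample mean converges to p*R + (1-p)*T."""
    rng = rng or random.Random(42)
    total = 0.0
    for _ in range(n_samples):
        if rng.random() < p_reflect:
            total += reflect_val   # ray took the reflective branch
        else:
            total += refract_val   # ray took the refractive branch
    return total / n_samples
```

In the full method each branch would also perturb the ray direction to model surface roughness, which is what produces the fuzzy reflections and soft shadows.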
  • Item
    Tree Growth Modelling Constrained by Growth Equations
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Yi, Lei; Li, Hongjun; Guo, Jianwei; Deussen, Oliver; Zhang, Xiaopeng; Chen, Min and Benes, Bedrich
    Modelling and simulation of tree growth that is faithful to the living environment and numerically consistent with botanic knowledge are important topics for realistic modelling in computer graphics. The realism factors concerned include the effects of a complex environment on tree growth and the reliability of the simulation in botanical research, such as horticulture and agriculture. This paper proposes a new approach, namely, integrated growth modelling, to model virtual trees and simulate their growth by enforcing constraints of environmental resources and tree morphological properties. Morphological properties are integrated into a growth equation with different parameters specified in the simulation, including its sensitivity to light, allocation and usage of received resources and effects on its environment. The growth equation guarantees that the simulation procedure numerically matches the natural growth phenomenon of trees. With this technique, the growth procedures of diverse and realistic trees can also be modelled in different environments, such as resource competition among multiple trees.
  • Item
    Enhancing the Realism of Sketch and Painted Portraits With Adaptable Patches
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Lee, Yin‐Hsuan; Chang, Yu‐Kai; Chang, Yu‐Lun; Lin, I‐Chen; Wang, Yu‐Shuen; Lin, Wen‐Chieh; Chen, Min and Benes, Bedrich
    Realizing unrealistic faces is a complicated task that requires a rich imagination and comprehension of facial structures. When face matching, warping or stitching techniques are applied, existing methods are generally incapable of capturing detailed personal characteristics, are disturbed by block boundary artefacts, or require painting‐photo pairs for training. This paper presents a data‐driven framework to enhance the realism of sketch and portrait paintings based only on photo samples. It retrieves the optimal patches of adaptable shapes and numbers according to the content of the input portrait and collected photos. These patches are then seamlessly stitched by chromatic gain and offset compensation and multi‐level blending. Experiments and user evaluations show that the proposed method is able to generate realistic and novel results for a moderately sized photo collection.
  • Item
    Guidelines for Quantitative Evaluation of Medical Visualizations on the Example of 3D Aneurysm Surface Comparisons
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Saalfeld, P.; Luz, M.; Berg, P.; Preim, B.; Saalfeld, S.; Chen, Min and Benes, Bedrich
    Medical visualizations are highly adapted to a specific medical application scenario. Therefore, many researchers conduct qualitative evaluations with a low number of physicians or medical experts to assess the benefits of their visualization technique. Although this type of research has advantages, it is difficult to reproduce and can be subjectively biased. This makes it problematic to quantify the benefits of a new visualization technique. Quantitative evaluation can objectify research and help bring new visualization techniques into clinical practice. To support researchers, we present guidelines to quantitatively evaluate medical visualizations, considering specific characteristics and difficulties. We demonstrate the adaptation of these guidelines on the example of comparative aneurysm surface visualizations. We developed three visualization techniques to compare aneurysm volumes. The visualization techniques depict two similar, but not identical aneurysm surface meshes. In a user study with 34 participants and five aneurysm data sets, we assessed objective measures (accuracy and required time) and subjective ratings (suitability and likeability). The provided guidelines and presentation of different stages of the evaluation allow for an easy adaptation to other application areas of medical visualization.
  • Item
    A Visualization Framework and User Studies for Overloaded Orthogonal Drawings
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Didimo, Walter; Kornaropoulos, Evgenios M.; Montecchiani, Fabrizio; Tollis, Ioannis G.; Chen, Min and Benes, Bedrich
    Overloaded orthogonal drawing (OOD) is a recent graph visualization style specifically conceived for directed graphs. It merges the advantages of some popular drawing conventions like layered drawings and orthogonal drawings, and provides additional support for some common analysis tasks. We present a visualization framework called DAGView, which implements algorithms and graphical features for the OOD style. Besides the algorithm for acyclic digraphs, the DAGView framework implements extensions to visualize both digraphs with cycles and undirected graphs, with the additional possibility of taking into account user preferences and constraints. It also supports an interactive visualization of clustered digraphs, based on the use of strongly connected components. Moreover, we describe an experimental user study, aimed to investigate the usability of OOD within the DAGView framework. The results of our study suggest that OOD can be effectively exploited to perform some basic tasks of analysis in a faster and more accurate way when compared to other drawing styles for directed graphs.
  • Item
    Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey R.; Wong, Pak Chung; Cook, Kristin A.; Chen, Min and Benes, Bedrich
    Real‐world systems change continuously. In domains such as traffic monitoring or cyber security, such changes occur within short time scales. This results in a streaming data problem and leads to unique challenges for the human in the loop, as analysts have to ingest and make sense of dynamic patterns in real time. While visualizations are being increasingly used by analysts to derive insights from streaming data, we lack a thorough characterization of the human‐centred design problems and a critical analysis of the state‐of‐the‐art solutions that exist for addressing these problems. In this paper, our goal is to fill this gap by studying how the state of the art in streaming data visualization handles the challenges and reflect on the gaps and opportunities. To this end, we have three contributions in this paper: (i) problem characterization for identifying domain‐specific goals and challenges for handling streaming data, (ii) a survey and analysis of the state of the art in streaming data visualization research with a focus on how visualization design meets challenges specific to change perception and (iii) reflections on the design trade‐offs, and an outline of potential research directions for addressing the gaps in the state of the art.
  • Item
    Stereo‐Consistent Contours in Object Space
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Bukenberger, Dennis R.; Schwarz, Katharina; Lensch, Hendrik P. A.; Chen, Min and Benes, Bedrich
    Notebook scribbles, art or technical illustrations—line drawings are a simplistic method to visually communicate information. Automated line drawings often originate from virtual 3D models, but one cannot trivially experience their three‐dimensionality. This paper introduces a novel concept to produce stereo‐consistent line drawings of virtual 3D objects. Some contour lines depend not only on an object's geometry, but also on the position of the observer. To accomplish consistency between multiple view positions, our approach exploits geometrical characteristics of 3D surfaces in object space. Established techniques for stereo‐consistent line drawings operate on rendered pixel images. In contrast, our pipeline operates in object space using vector geometry, which yields many advantages: The position of the final viewpoint(s) is flexible within a certain window even after the contour generation, e.g. a stereoscopic image pair is only one possible application. Such windows can be concatenated to simulate contours observed from an arbitrary camera path. Various types of popular contour generators can be handled equivalently, occlusions are natively supported and stylization based on geometry characteristics is also easily possible.
  • Item
    Improved Corners with Multi‐Channel Signed Distance Fields
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chlumský, V.; Sloup, J.; Šimeček, I.; Chen, Min and Benes, Bedrich
    We propose an extension to the state‐of‐the‐art text rendering technique based on sampling a 2D signed distance field from a texture. This extension significantly improves the visual quality of sharp corners, which is the most problematic feature to reproduce for the original technique. We achieve this by using a combination of multiple distance fields in conjunction, which together provide a more thorough representation of the given glyph's (or any other 2D shape's) geometry. This multi‐channel distance field representation is described along with its application in shader‐based rendering. The rendering process itself remains very simple and efficient, and is fully compatible with previous monochrome distance fields. The introduced method of multi‐channel distance field construction requires a vector representation of the input shape. A comparative measurement of rendering quality shows that the error in the output image can be reduced by up to several orders of magnitude.
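The decoding side of a multi-channel signed distance field is commonly the median of the three channels, which restores the sharp corners that thresholding a single smoothed channel would round off. A minimal sketch of that reconstruction step, written in Python rather than the shader code the paper targets, with `median3` and `msdf_sample` as invented names:

```python
def median3(r, g, b):
    """Median of three channel values, branchless min/max form."""
    return max(min(r, g), min(max(r, g), b))

def msdf_sample(r, g, b, threshold=0.5):
    """Inside/outside test at one texel: threshold the per-texel median.
    A stray extreme value in one channel (used to encode a corner edge)
    cannot pull the median across the threshold on its own."""
    return median3(r, g, b) >= threshold
```

In a fragment shader the same expression maps to `median(sample.r, sample.g, sample.b)` followed by a smoothstep around the threshold for antialiasing.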
  • Item
    Uniformization and Density Adaptation for Point Cloud Data Via Graph Laplacian
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Luo, Chuanjiang; Ge, Xiaoyin; Wang, Yusu; Chen, Min and Benes, Bedrich
    Point cloud data is one of the most common types of input for geometric processing applications. In this paper, we study the point cloud density adaptation problem that underlies many pre‐processing tasks of point data. Specifically, given a (sparse) set of points sampling an unknown surface and a target density function, the goal is to adapt the points to match the target distribution. We propose a simple and robust framework that is effective at achieving both local uniformity and precise global density distribution control. Our approach relies on the Gaussian‐weighted graph Laplacian and works purely in the points setting. While it is well known that graph Laplacian is related to mean‐curvature flow and thus has denoising ability, our algorithm uses certain information encoded in the graph Laplacian that is orthogonal to the mean‐curvature flow. Furthermore, by leveraging the natural scale parameter contained in the Gaussian kernel and combining it with a simulated annealing idea, our algorithm moves points in a multi‐scale manner. The resulting algorithm relies much less on the input points to have a good initial distribution (neither uniform nor close to the target density distribution) than many previous refinement‐based methods. We demonstrate the simplicity and effectiveness of our algorithm with point clouds sampled from different underlying surfaces with various geometric and topological properties.
  • Item
    An Efficient Hybrid Incompressible SPH Solver with Interface Handling for Boundary Conditions
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Takahashi, Tetsuya; Dobashi, Yoshinori; Nishita, Tomoyuki; Lin, Ming C.; Chen, Min and Benes, Bedrich
    We propose a hybrid smoothed particle hydrodynamics solver for efficiently simulating incompressible fluids using an interface handling method for boundary conditions in the pressure Poisson equation. We blend particle density computed with one smooth and one spiky kernel to improve the robustness against both fluid–fluid and fluid–solid collisions. To further improve the robustness and efficiency, we present a new interface handling method consisting of two components: free surface handling for Dirichlet boundary conditions and solid boundary handling for Neumann boundary conditions. Our free surface handling appropriately determines particles for Dirichlet boundary conditions using Jacobi‐based pressure prediction, while our solid boundary handling introduces a new term to ensure the solvability of the linear system. We demonstrate that our method outperforms the state‐of‐the‐art particle‐based fluid solvers.
  • Item
    Large‐Scale Pixel‐Precise Deferred Vector Maps
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Thöny, Matthias; Billeter, Markus; Pajarola, Renato; Chen, Min and Benes, Bedrich
    Rendering vector maps is a key challenge for high‐quality geographic visualization systems. In this paper, we present a novel approach to visualize vector maps over detailed terrain models in a pixel‐precise way. Our method proposes a deferred line rendering technique to display vector maps directly in a screen‐space shading stage over the 3D terrain visualization. Due to the absence of traditional geometric polygonal rendering, our algorithm is able to outperform conventional vector map rendering algorithms for geographic information systems, and supports advanced line anti‐aliasing as well as slope distortion correction. Furthermore, our deferred line rendering enables interactively customizable advanced vector styling methods as well as a tool for interactive pixel‐based editing operations.
  • Item
    Olfaction and Selective Rendering
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Harvey, Carlo; Bashford‐Rogers, Thomas; Debattista, Kurt; Doukakis, Efstratios; Chalmers, Alan; Chen, Min and Benes, Bedrich
    Accurate simulation of all the senses in virtual environments is a computationally expensive task. Visual saliency models have been used to improve computational performance for rendered content, but this is insufficient for multi‐modal environments. This paper considers cross‐modal perception and, in particular, if and how olfaction affects visual attention. Two experiments are presented in this paper. Firstly, eye tracking is gathered from a number of participants to gain an impression about where and how they view virtual objects when smell is introduced compared to an odourless condition. Based on the results of this experiment, a new type of saliency map in a selective‐rendering pipeline is presented. A second experiment validates this approach, and demonstrates that participants rank images as better quality, when compared to a reference, for the same rendering budget.
  • Item
    Frame Rate vs Resolution: A Subjective Evaluation of Spatiotemporal Perceived Quality Under Varying Computational Budgets
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Debattista, K.; Bugeja, K.; Spina, S.; Bashford‐Rogers, T.; Hulusic, V.; Chen, Min and Benes, Bedrich
    Maximizing performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximize perceived quality. This work investigates perceived quality across computational budgets for the primary spatiotemporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (fps) at lower resolutions over 30 fps at higher resolutions. Experiment 2 (n = 24) explores the relationship further with more budgets and quality settings and again finds 60 fps is generally preferred even when more resources are available. Experiment 3 (n = 25) permits the use of adaptive frame rates, and analyses the resource allocation across seven budgets. Results show that while participants allocate more resources to frame rate at lower budgets, the situation reverses once higher budgets are available and a frame rate of around 40 fps is achieved. Overall, the results demonstrate a complex relationship between the effects of frame rate and resolution on perceived quality. This relationship can be harnessed, via the results and models presented, to obtain more cost‐effective virtual experiences.
  • Item
    ProactiveCrowd: Modelling Proactive Steering Behaviours for Agent‐Based Crowd Simulation
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Luo, Linbo; Chai, Cheng; Ma, Jianfeng; Zhou, Suiping; Cai, Wentong; Chen, Min and Benes, Bedrich
    How to realistically model an agent's steering behaviour is a critical issue in agent‐based crowd simulation. In this work, we investigate proactive steering strategies for agents to minimize potential collisions. To this end, a behaviour‐based modelling framework is first introduced to model the process of how humans select a proactive steering strategy in crowded situations and execute the corresponding behaviour. We then propose behaviour models for two inter‐related proactive steering behaviours, namely gap seeking and following. These behaviours can be frequently observed in real‐life scenarios, and they can easily affect overall crowd dynamics. We validate our work by evaluating the simulation results of our model with real‐world data and comparing the performance of our model with that of two state‐of‐the‐art crowd models. The results show that the performance of our model is better than or at least comparable to the compared models in terms of realism at both individual and crowd levels.
  • Item
    Interactive Large‐Scale Procedural Forest Construction and Visualization Based on Particle Flow Simulation
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kohek, Štefan; Strnad, Damjan; Chen, Min and Benes, Bedrich
    Interactive visualization of large forest scenes is challenging due to the large amount of geometric detail that needs to be generated and stored, particularly in scenarios with a moving observer such as forest walkthroughs or overflights. Here, we present a new method for large‐scale procedural forest generation and visualization at interactive rates. We propose a hybrid approach by combining geometry‐based and volumetric modelling techniques with gradually transitioning level of detail (LOD). Nearer trees are constructed using an extended particle flow algorithm, in which particle trails outline the tree ramification in an inverse direction, i.e. from the leaves towards the roots. Reduced geometric representation of a tree is obtained by subsampling the trails. For distant trees, a new volumetric rendering technique in pixel‐space is introduced, which avoids geometry formation altogether and enables visualization of vast forest areas with millions of unique trees. We demonstrate that a GPU‐based implementation of the proposed method provides interactive frame rates in forest overflight scenarios, where new trees are constructed and their LOD adjusted on the fly.
  • Item
    A Survey on Multimodal Medical Data Visualization
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Lawonn, K.; Smit, N.N.; Bühler, K.; Preim, B.; Chen, Min and Benes, Bedrich
    Multi‐modal data of the complex human anatomy contain a wealth of information. To visualize and explore such data, techniques for emphasizing important structures and controlling visibility are essential. Such fused overview visualizations guide physicians to suspicious regions to be analysed in detail, e.g. with slice‐based viewing. We give an overview of the state of the art in multi‐modal medical data visualization techniques. Multi‐modal medical data consist of multiple scans of the same subject using various acquisition methods, often combining multiple complementary types of information. Three‐dimensional visualization techniques for multi‐modal medical data can be used in diagnosis, treatment planning, doctor–patient communication as well as interdisciplinary communication. Over the years, multiple techniques have been developed in order to cope with the various associated challenges and present the relevant information from multiple sources in an insightful way. We present an overview of these techniques and analyse the specific challenges that arise in multi‐modal data visualization and how recent works aimed to solve these, often using smart visibility techniques. We provide a taxonomy of these multi‐modal visualization applications based on the modalities used and the visualization techniques employed. Additionally, we identify unsolved problems as potential future research directions.
  • Item
    Distinctive Approaches to Computer Graphics Education
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Santos, B. Sousa; Dischler, J.‐M.; Adzhiev, V.; Anderson, E.F.; Ferko, A.; Fryazinov, O.; Ilčík, M.; Ilčíková, I.; Slavik, P.; Sundstedt, V.; Svobodova, L.; Wimmer, M.; Zara, J.; Chen, Min and Benes, Bedrich
    This paper presents the latest advances and research in Computer Graphics education in a nutshell. It is concerned with topics that were presented at the Education Track of the Eurographics Conference held in Lisbon in 2016. We describe works corresponding to approaches to Computer Graphics education that are unconventional in some way and attempt to tackle unsolved problems and challenges regarding the role of arts in computer graphics education, the role of research‐oriented activities in undergraduate education and the interaction among different areas of Computer Graphics, as well as their application to courses or extra‐curricular activities. We present related works addressing these topics and report experiences, successes and issues in implementing the approaches.
  • Item
    Application‐Specific Tone Mapping Via Genetic Programming
    (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Debattista, K.; Chen, Min and Benes, Bedrich
    High dynamic range (HDR) imagery permits the manipulation of real‐world data distinct from the limitations of traditional low dynamic range (LDR) content. The process of retargeting HDR content to traditional LDR imagery via tone mapping operators (TMOs) is useful for visualizing HDR content on traditional displays, supporting backwards‐compatible HDR compression and, more recently, is being frequently used for input into a wide variety of computer vision applications. This work presents the automatic generation of TMOs for specific applications via the evolutionary computing method of genetic programming (GP). A straightforward, generic GP method that generates TMOs for a given fitness function and HDR content is presented. Its efficacy is demonstrated in the context of three applications: visualization of HDR content on LDR displays, feature mapping and compression. For these applications, results show good performance for the generated TMOs when compared to traditional methods. Furthermore, they demonstrate that the method is generalizable and could be used across various applications that require TMOs but for which dedicated successful TMOs have not yet been discovered.