38-Issue 6


Issue Information

Issue Information

Articles

Automatic Generation of Vivid LEGO Architectural Sculptures

Zhou, J.
Chen, X.
Xu, Y.
Articles

Progressive Transient Photon Beams

Marco, Julio
Guillén, Ibón
Jarosz, Wojciech
Gutierrez, Diego
Jarabo, Adrian
Articles

LSMAT Least Squares Medial Axis Transform

Rebain, Daniel
Angles, Baptiste
Valentin, Julien
Vining, Nicholas
Peethambaran, Jiju
Izadi, Shahram
Tagliasacchi, Andrea
Articles

Appearance Modelling of Living Human Tissues

Nunes, Augusto L.P.
Maciel, Anderson
Meyer, Gary W.
John, Nigel W.
Baranoski, Gladimir V.G.
Walter, Marcelo
Articles

Skiing Simulation Based on Skill‐Guided Motion Planning

Hu, Chen‐Hui
Lee, Chien‐Ying
Liou, Yen‐Ting
Sung, Feng‐Yu
Lin, Wen‐Chieh
Articles

Efficient Computation of Smoothed Exponential Maps

Herholz, Philipp
Alexa, Marc
Articles

Markerless Multiview Motion Capture with 3D Shape Model Adaptation

Fechteler, P.
Hilsmann, A.
Eisert, P.
Articles

LinesLab: A Flexible Low‐Cost Approach for the Generation of Physical Monochrome Art

Stoppel, S.
Bruckner, S.
Articles

The State of the Art in Multilayer Network Visualization

McGee, F.
Ghoniem, M.
Melançon, G.
Otjacques, B.
Pinaud, B.
Articles

User‐Guided Facial Animation through an Evolutionary Interface

Reed, K.
Cosker, D.
Articles

Cuttlefish: Color Mapping for Dynamic Multi‐Scale Visualizations

Waldin, N.
Waldner, M.
Le Muzic, M.
Gröller, E.
Goodsell, D. S.
Autin, L.
Olson, A. J.
Viola, I.


BibTeX (38-Issue 6)
                
@article{10.1111:cgf.13460,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13460}
}

@article{10.1111:cgf.13603,
  journal = {Computer Graphics Forum},
  title = {{Automatic Generation of Vivid LEGO Architectural Sculptures}},
  author = {Zhou, J. and Chen, X. and Xu, Y.},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13603}
}

@article{10.1111:cgf.13600,
  journal = {Computer Graphics Forum},
  title = {{Progressive Transient Photon Beams}},
  author = {Marco, Julio and Guillén, Ibón and Jarosz, Wojciech and Gutierrez, Diego and Jarabo, Adrian},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13600}
}

@article{10.1111:cgf.13599,
  journal = {Computer Graphics Forum},
  title = {{LSMAT Least Squares Medial Axis Transform}},
  author = {Rebain, Daniel and Angles, Baptiste and Valentin, Julien and Vining, Nicholas and Peethambaran, Jiju and Izadi, Shahram and Tagliasacchi, Andrea},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13599}
}

@article{10.1111:cgf.13604,
  journal = {Computer Graphics Forum},
  title = {{Appearance Modelling of Living Human Tissues}},
  author = {Nunes, Augusto L.P. and Maciel, Anderson and Meyer, Gary W. and John, Nigel W. and Baranoski, Gladimir V.G. and Walter, Marcelo},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13604}
}

@article{10.1111:cgf.13606,
  journal = {Computer Graphics Forum},
  title = {{Skiing Simulation Based on Skill‐Guided Motion Planning}},
  author = {Hu, Chen‐Hui and Lee, Chien‐Ying and Liou, Yen‐Ting and Sung, Feng‐Yu and Lin, Wen‐Chieh},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13606}
}

@article{10.1111:cgf.13607,
  journal = {Computer Graphics Forum},
  title = {{Efficient Computation of Smoothed Exponential Maps}},
  author = {Herholz, Philipp and Alexa, Marc},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13607}
}

@article{10.1111:cgf.13608,
  journal = {Computer Graphics Forum},
  title = {{Markerless Multiview Motion Capture with 3D Shape Model Adaptation}},
  author = {Fechteler, P. and Hilsmann, A. and Eisert, P.},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13608}
}

@article{10.1111:cgf.13609,
  journal = {Computer Graphics Forum},
  title = {{LinesLab: A Flexible Low‐Cost Approach for the Generation of Physical Monochrome Art}},
  author = {Stoppel, S. and Bruckner, S.},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13609}
}

@article{10.1111:cgf.13610,
  journal = {Computer Graphics Forum},
  title = {{The State of the Art in Multilayer Network Visualization}},
  author = {McGee, F. and Ghoniem, M. and Melançon, G. and Otjacques, B. and Pinaud, B.},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13610}
}

@article{10.1111:cgf.13612,
  journal = {Computer Graphics Forum},
  title = {{User‐Guided Facial Animation through an Evolutionary Interface}},
  author = {Reed, K. and Cosker, D.},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13612}
}

@article{10.1111:cgf.13611,
  journal = {Computer Graphics Forum},
  title = {{Cuttlefish: Color Mapping for Dynamic Multi‐Scale Visualizations}},
  author = {Waldin, N. and Waldner, M. and Le Muzic, M. and Gröller, E. and Goodsell, D. S. and Autin, L. and Olson, A. J. and Viola, I.},
  year = {2019},
  publisher = {© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13611}
}


Recent Submissions

  • Item
    Issue Information
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Chen, Min and Benes, Bedrich
  • Item
    Automatic Generation of Vivid LEGO Architectural Sculptures
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Zhou, J.; Chen, X.; Xu, Y.; Chen, Min and Benes, Bedrich
    Brick elements are very popular and have been widely used in many areas, such as toy design and architecture. Designing a vivid brick sculpture to represent a three‐dimensional (3D) model is a challenging task that requires professional skills and experience to convey unique visual characteristics. We introduce an automatic system to convert an architectural model into a LEGO sculpture while preserving the original model's shape features. Unlike previous legolization techniques that generate a LEGO sculpture exactly based on the input model's voxel representation, we extract the model's visual features, including repeating components, shape details and planarity. Then, we translate these visual features into the final LEGO sculpture by employing various brick types. We propose a deformation algorithm to resolve discrepancies between an input mesh's continuous 3D shape and the discrete positions of bricks in a LEGO sculpture. We evaluate our system on various architectural models and compare our method with previous voxelization‐based methods. The results demonstrate that our approach successfully conveys important visual features from digital models and generates vivid LEGO sculptures. Real LEGO sculptures can then be built according to the automatically generated results. (An illustrative sketch of the basic voxel‐to‐brick step appears after this list.)
  • Item
    Progressive Transient Photon Beams
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Marco, Julio; Guillén, Ibón; Jarosz, Wojciech; Gutierrez, Diego; Jarabo, Adrian; Chen, Min and Benes, Bedrich
    In this work, we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust and able to generate animations of time‐resolved light transport featuring complex caustic light paths in media. We base our method on the observation that spatial continuity provides increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend steady‐state photon beam radiance estimates to include the temporal domain. Then, we develop a progressive variant of our approach which provably converges to the correct solution using finite memory, by averaging independent realizations of the estimates with progressively reduced kernel bandwidths. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media. (A toy sketch of the progressive bandwidth‐reduction idea appears after this list.)
  • Item
    LSMAT Least Squares Medial Axis Transform
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Rebain, Daniel; Angles, Baptiste; Valentin, Julien; Vining, Nicholas; Peethambaran, Jiju; Izadi, Shahram; Tagliasacchi, Andrea; Chen, Min and Benes, Bedrich
    The medial axis transform has applications in numerous fields including visualization, computer graphics, and computer vision. Unfortunately, traditional medial axis transformations are usually brittle in the presence of outliers, perturbations and/or noise along the boundary of objects. To overcome this limitation, we introduce a new formulation of the medial axis transform which is naturally robust in the presence of these artefacts. Unlike previous work which has approached the medial axis from a computational geometry angle, we consider it from a numerical optimization perspective. In this work, we follow the definition of the medial axis transform as 'the set of maximally inscribed spheres'. We show how this definition can be formulated as a least squares relaxation where the transform is obtained by minimizing a continuous optimization problem. The proposed approach is inherently parallelizable by performing independent optimization of each sphere using Gauss–Newton, and its least‐squares form allows it to be significantly more robust compared to traditional computational geometry approaches. Extensive experiments on 2D and 3D objects demonstrate that our method provides superior results to the state of the art on both synthetic and real data. (A small sketch of least‐squares sphere fitting with Gauss–Newton appears after this list.)
  • Item
    Appearance Modelling of Living Human Tissues
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Nunes, Augusto L.P.; Maciel, Anderson; Meyer, Gary W.; John, Nigel W.; Baranoski, Gladimir V.G.; Walter, Marcelo; Chen, Min and Benes, Bedrich
    The visual fidelity of realistic renderings in Computer Graphics depends fundamentally upon how we model the appearance of objects resulting from the interaction between light and matter reaching the eye. In this paper, we survey the research addressing appearance modelling of living human tissue. Among the many classes of natural materials already researched in Computer Graphics, living human tissues such as blood and skin have recently seen an increase in attention from graphics research. There is already an incipient but substantial body of literature on this topic, but it has so far lacked a structured review such as the one presented here. We introduce a classification for the approaches using the four types of human tissues as classifiers. We show a growing trend of solutions that use first principles from Physics and Biology as fundamental knowledge upon which the models are built. The organic quality of visual results provided by these approaches is mainly determined by the optical properties of biophysical components interacting with light. Beyond just picture making, these models can be used in predictive simulations, with the potential for impact in many other areas.
  • Item
    Skiing Simulation Based on Skill‐Guided Motion Planning
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Hu, Chen‐Hui; Lee, Chien‐Ying; Liou, Yen‐Ting; Sung, Feng‐Yu; Lin, Wen‐Chieh; Chen, Min and Benes, Bedrich
    Skiing is a popular recreational sport, and competitive skiing events have been part of the Winter Olympic Games. Because skiing covers a wide range of outdoor terrain, motion capture is difficult and usually not a good solution for generating skiing animations. Physical simulation offers a more viable alternative. However, skiing simulation is challenging as skiing involves many complicated motor skills and physics, such as balance keeping, movement coordination, articulated body dynamics and ski‐snow reaction. In particular, as no reference motions (usually from MOCAP data) are readily available for guiding the high‐level motor control, we need to synthesize plausible reference motions additionally. To solve this problem, sports techniques are exploited for reference motion planning. We propose a physics‐based framework that employs kinetic analyses of skiing techniques and the ski‐snow contact model to generate realistic skiing motions. By simulating the inclination, angulation and weighting/unweighting techniques, stable and plausible carving turns and bump skiing animations can be generated. We evaluate our framework by demonstrating various skiing motions with different speeds, curvature radii and bump sizes. Our results show that employing the sports techniques used by athletes offers considerable potential for generating agile sport motions without reference motions.
  • Item
    Efficient Computation of Smoothed Exponential Maps
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Herholz, Philipp; Alexa, Marc; Chen, Min and Benes, Bedrich
    Many applications in geometry processing require the computation of local parameterizations on a surface mesh at interactive rates. A popular approach is to compute local exponential maps, i.e. parameterizations that preserve distance and angle to the origin of the map. We extend the computation of geodesic distance by heat diffusion to also determine angular information for the geodesic curves. This approach has two important benefits compared to fast approximate as well as exact forward tracing of the distance function: First, it allows generating smoother maps, avoiding discontinuities. Second, exploiting the factorization of the global Laplace–Beltrami operator of the mesh and using recent localized solution techniques, the computation is more efficient even compared to fast approximate solutions based on Dijkstra's algorithm.
  • Item
    Markerless Multiview Motion Capture with 3D Shape Model Adaptation
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Fechteler, P.; Hilsmann, A.; Eisert, P.; Chen, Min and Benes, Bedrich
    In this paper, we address simultaneous markerless motion and shape capture from 3D input meshes of partial views onto a moving subject. We exploit a computer graphics model based on kinematic skinning as the template tracking model. This template model consists of vertices, joints and skinning weights learned a priori from registered full‐body scans, representing true human shape and kinematics‐based shape deformations. Two data‐driven priors are used together with a set of constraints and cues for setting up sufficient correspondences. A Gaussian mixture model‐based pose prior of successive joint configurations is learned to soft‐constrain the attainable pose space to plausible human poses. To make the shape adaptation robust to outliers and non‐visible surface regions, and to guide the shape adaptation towards realistically appearing human shapes, we use a mesh‐Laplacian‐based shape prior. Both priors are learned/extracted from the training set of the template model learning phase. The output is a model adapted to the captured subject with respect to shape and kinematic skeleton, as well as the animation parameters needed to resemble the observed movements. With example applications, we demonstrate the benefit of such footage. Experimental evaluations on publicly available datasets show the achieved natural appearance and accuracy. (A brief sketch of a Gaussian‐mixture pose prior appears after this list.)
  • Item
    LinesLab: A Flexible Low‐Cost Approach for the Generation of Physical Monochrome Art
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Stoppel, S.; Bruckner, S.; Chen, Min and Benes, Bedrich
    The desire for the physical generation of computer art has motivated a significant body of research, resulting in sophisticated robots and painting machines, together with specialized algorithms mimicking particular artistic techniques. The resulting setups are often expensive and complex, making them unavailable for recreational and hobbyist use. In recent years, however, a new class of affordable low‐cost plotters and cutting machines has reached the market. In this paper, we present a novel system for the physical generation of line and cut‐out art based on digital images, targeted at such off‐the‐shelf devices. Our approach uses a meta‐optimization process to generate results that represent the tonal content of a digital image while conforming to the physical and mechanical constraints of home‐use devices. By flexibly combining basic sets of positional and shape encodings, we are able to recreate a wide range of artistic styles. Furthermore, our system optimizes the output in terms of visual perception based on the desired viewing distance, while remaining scalable with respect to the medium size. (A short sketch of tone‐matched stroke placement appears after this list.)
  • Item
    The State of the Art in Multilayer Network Visualization
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) McGee, F.; Ghoniem, M.; Melançon, G.; Otjacques, B.; Pinaud, B.; Chen, Min and Benes, Bedrich
    Modelling relationships between entities in real‐world systems with a simple graph is a standard approach. However, reality is better embraced as several interdependent subsystems (or layers). Recently, the concept of a multilayer network model has emerged from the field of complex systems. This model can be applied to a wide range of real‐world data sets. Examples of multilayer networks can be found in the domains of life sciences, sociology, digital humanities and more. Within the domain of graph visualization, there are many systems which visualize data sets having many characteristics of multilayer graphs. This report provides a survey of the state of the art and a structured analysis of contemporary multilayer network visualization, not only for researchers in visualization, but also for those who aim to visualize multilayer networks in the domain of complex systems, as well as those developing systems across application domains. We have explored the visualization literature to survey visualization techniques suitable for multilayer graph visualization, as well as tools, tasks and analytic techniques from within application domains. This report also identifies the outstanding challenges for multilayer graph visualization and suggests future research directions for addressing them.
  • Item
    User‐Guided Facial Animation through an Evolutionary Interface
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Reed, K.; Cosker, D.; Chen, Min and Benes, Bedrich
    We propose a design framework to assist with user‐generated content in facial animation, without requiring any animation experience or ground truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach looks to encode the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Presented as a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of disregarding unfavourable control values, parameters converge towards the user's ideal. We demonstrate our framework through two non‐trivial applications: creating highly nuanced expressions by evolving control values of a face rig, and non‐linear motion through evolving control point positions of animation curves. (A toy sketch of the interactive evolutionary loop appears after this list.)
  • Item
    Cuttlefish: Color Mapping for Dynamic Multi‐Scale Visualizations
    (© 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2019) Waldin, N.; Waldner, M.; Le Muzic, M.; Gröller, E.; Goodsell, D. S.; Autin, L.; Olson, A. J.; Viola, I.; Chen, Min and Benes, Bedrich
    Visualizations of hierarchical data can often be explored interactively. For example, in geographic visualization, there are continents, which can be subdivided into countries, states, counties and cities. Similarly, in models of viruses or bacteria, the highest level contains the compartments; below that are macromolecules, secondary structures (such as α‐helices) and amino acids, and on the finest level, atoms. Distinguishing between items can be assisted through the use of color at all levels. However, currently, there are no hierarchical and adaptive color mapping techniques for very large multi‐scale visualizations that can be explored interactively. We present a novel, multi‐scale, color‐mapping technique for adaptively adjusting the color scheme to the current view and scale. Color is treated as a resource and is smoothly redistributed. The distribution adjusts to the scale of the currently observed detail and maximizes the color range utilization given the current viewing requirements. Thus, we ensure that the user is able to distinguish items on any level, even if the color is not constant for a particular feature. The coloring technique is demonstrated for a political map and a mesoscale structural model of HIV. The technique has been tested by users with expertise in structural biology and was overall well received. (A small sketch of hierarchical hue redistribution appears after this list.)
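
Illustrative Code Sketches

For the LEGO sculpture paper: legolization pipelines generally start from a voxelized model and cover each voxel layer with bricks from a fixed inventory. The snippet below is a minimal, hypothetical sketch of that shared baseline step (a greedy per-layer merge); it does not implement the paper's feature extraction or deformation stages, and the brick set and example layer are invented for illustration.

# A minimal, hypothetical sketch of one step that legolization pipelines commonly
# share: greedily merging the filled cells of a single voxel layer into larger
# rectangular bricks drawn from a fixed brick inventory. This is NOT the authors'
# feature-preserving algorithm; it only illustrates the voxel-to-brick idea.
import numpy as np

# Available brick footprints (height x width in cells), largest first.
BRICKS = [(2, 4), (4, 2), (2, 2), (1, 4), (4, 1), (1, 2), (2, 1), (1, 1)]

def legolize_layer(layer):
    """layer: 2D boolean array of filled cells. Returns a list of (row, col, h, w)."""
    filled = layer.copy()
    placed = []
    for r in range(filled.shape[0]):
        for c in range(filled.shape[1]):
            if not filled[r, c]:
                continue
            for h, w in BRICKS:  # try the largest brick that still fits
                block = filled[r:r + h, c:c + w]
                if block.shape == (h, w) and block.all():
                    placed.append((r, c, h, w))
                    filled[r:r + h, c:c + w] = False
                    break
    return placed

if __name__ == "__main__":
    layer = np.ones((4, 6), dtype=bool)
    layer[0, 0] = False          # a missing corner cell
    for brick in legolize_layer(layer):
        print(brick)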
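
For Progressive Transient Photon Beams: the abstract describes a progressive estimator that averages independent realizations while the kernel bandwidth shrinks, so variance is controlled by averaging and bias vanishes as the kernel narrows. The toy below applies that recipe to a plain 1D density estimate rather than beam radiance; the bandwidth schedule r_i = r_1 * i^(-alpha) and all constants are assumptions for illustration only.

# A toy, hypothetical illustration of the progressive-estimation idea: average
# independent kernel estimates whose bandwidth shrinks as r_i = r1 * i**(-alpha).
# It estimates a simple 1D density, not transient beam radiance.
import numpy as np

rng = np.random.default_rng(0)
x_eval, r1, alpha = 0.0, 0.5, 0.25   # query point, initial bandwidth, shrink rate

def kernel_estimate(samples, x, r):
    """Box-kernel density estimate at x with bandwidth r."""
    return np.count_nonzero(np.abs(samples - x) < r) / (len(samples) * 2.0 * r)

running, n_passes, batch = 0.0, 2000, 64
for i in range(1, n_passes + 1):
    samples = rng.normal(0.0, 1.0, batch)          # a fresh, independent realization
    r_i = r1 * i ** (-alpha)                       # progressively reduced bandwidth
    running += (kernel_estimate(samples, x_eval, r_i) - running) / i  # running mean

print(f"progressive estimate at 0: {running:.4f}  (true N(0,1) density: 0.3989)")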
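
For LSMAT: the abstract casts medial spheres as solutions of a continuous least-squares problem optimized independently per sphere with Gauss-Newton. The sketch below shows only the generic machinery, fitting one sphere to noisy boundary samples by minimizing the sum of (||p_i - c|| - r)^2; the paper's actual energy for maximally inscribed spheres is different and more involved, and the data here are synthetic.

# A hypothetical mini-example of treating a sphere as the solution of a
# least-squares problem solved by Gauss-Newton. It fits a sphere to noisy boundary
# samples; it does not implement the LSMAT inscribed-sphere energy.
import numpy as np

def fit_sphere_gauss_newton(points, c0, r0, iters=20):
    """Minimize sum_i (||p_i - c|| - r)^2 over center c and radius r."""
    c, r = np.asarray(c0, float), float(r0)
    for _ in range(iters):
        diff = points - c                       # (n, 3)
        dist = np.linalg.norm(diff, axis=1)     # (n,)
        res = dist - r                          # residuals
        J = np.hstack([-diff / dist[:, None], -np.ones((len(points), 1))])
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)   # Gauss-Newton step
        c, r = c + step[:3], r + step[3]
    return c, r

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_c, true_r = np.array([1.0, -2.0, 0.5]), 3.0
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = true_c + true_r * dirs + 0.01 * rng.normal(size=(500, 3))
    print(fit_sphere_gauss_newton(pts, c0=[0.0, 0.0, 0.0], r0=1.0))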
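
For the markerless motion capture paper: one of its two priors is a Gaussian-mixture pose prior that soft-constrains poses to a plausible space. The snippet below shows the general pattern with scikit-learn's GaussianMixture on made-up joint-angle vectors, using the negative log-likelihood as a penalty term; the joint parameterization, mixture size and training data are all assumptions.

# A hedged sketch of the *kind* of pose prior the abstract mentions: fit a Gaussian
# mixture model to joint-angle vectors and use its negative log-likelihood as a
# soft penalty during pose optimization. All data here are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Fake "training poses": 1000 poses x 15 joint angles, clustered around two modes.
modes = np.stack([rng.normal(0.0, 0.1, 15), rng.normal(0.8, 0.1, 15)])
train_poses = modes[rng.integers(0, 2, 1000)] + rng.normal(0.0, 0.05, (1000, 15))

prior = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
prior.fit(train_poses)

def pose_prior_energy(pose):
    """Soft constraint: low for plausible poses, high for implausible ones."""
    return -prior.score_samples(pose.reshape(1, -1))[0]

print("plausible pose energy  :", pose_prior_energy(train_poses[0]))
print("implausible pose energy:", pose_prior_energy(np.full(15, 5.0)))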
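
For LinesLab: reproducing an image's tonal content with plotter strokes means, at minimum, drawing more strokes where the image is darker. The sketch below shows only that elementary tone-to-stroke-density mapping on a synthetic gradient; the paper's meta-optimization over perceptual and device constraints is far richer, and the cell size and stroke counts here are arbitrary.

# A rough, hypothetical sketch of one ingredient of plotter line art: choosing how
# many strokes to draw in each image cell so local stroke density matches the
# cell's tone (darker cell -> more lines). Not the paper's optimization.
import numpy as np

def strokes_for_image(gray, cell=8, max_lines=4):
    """gray: 2D array in [0, 1], 1 = white. Yields horizontal stroke segments."""
    h, w = gray.shape
    for y0 in range(0, h - cell + 1, cell):
        for x0 in range(0, w - cell + 1, cell):
            darkness = 1.0 - gray[y0:y0 + cell, x0:x0 + cell].mean()
            n_lines = int(round(darkness * max_lines))
            for k in range(n_lines):                    # evenly spaced strokes
                y = y0 + (k + 0.5) * cell / max(n_lines, 1)
                yield (x0, y), (x0 + cell, y)

# Synthetic left-to-right gradient: strokes get denser towards the dark side.
img = np.tile(np.linspace(1.0, 0.0, 64), (64, 1))
print(sum(1 for _ in strokes_for_image(img)), "strokes generated")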
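
For the user-guided facial animation paper: the interface samples control combinations, lets the user pick favourites and resamples around them. The toy loop below follows that pattern but replaces the user's visual judgement with distance to a hidden target vector so it can run unattended; the population size, mutation scale and fitness stand-in are all invented.

# A toy, hypothetical version of the interactive evolutionary loop: sample control
# combinations, let a "user" keep favourites, and resample around them so the
# parameters converge towards the user's ideal. The user is simulated here.
import numpy as np

rng = np.random.default_rng(2)
n_controls, pop_size, generations = 8, 12, 30
target = rng.uniform(0.0, 1.0, n_controls)          # stands in for the user's ideal

population = rng.uniform(0.0, 1.0, (pop_size, n_controls))
sigma = 0.25                                         # mutation scale, shrinks over time
for gen in range(generations):
    # "User feedback": keep the three samples closest to the (hidden) ideal.
    scores = np.linalg.norm(population - target, axis=1)
    favourites = population[np.argsort(scores)[:3]]
    # Resample the next generation around the selected favourites.
    parents = favourites[rng.integers(0, 3, pop_size)]
    population = np.clip(parents + rng.normal(0.0, sigma, parents.shape), 0.0, 1.0)
    sigma *= 0.9                                     # gradually refine the search

best = population[np.argmin(np.linalg.norm(population - target, axis=1))]
print("error after evolution:", np.linalg.norm(best - target))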
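
For Cuttlefish: color is treated as a resource that is redistributed to whatever level of the hierarchy is currently in view. The function below is a simplified, hypothetical version of that idea, splitting a hue interval among the visible children in proportion to a weight; it ignores the paper's perceptual tuning and temporal smoothness, and the weights are invented.

# A simplified, hypothetical illustration of "color as a redistributable resource":
# the currently visible children split their parent's hue range in proportion to a
# weight, so whatever level is in view uses the full range. Not the paper's scheme.
def assign_hues(children_weights, hue_lo=0.0, hue_hi=1.0, gap=0.02):
    """Return {child: (lo, hi)} splitting [hue_lo, hue_hi] by relative weight."""
    total = sum(children_weights.values())
    span = (hue_hi - hue_lo) - gap * (len(children_weights) - 1)
    ranges, cursor = {}, hue_lo
    for name, weight in children_weights.items():
        width = span * weight / total
        ranges[name] = (cursor, cursor + width)
        cursor += width + gap
    return ranges

# Zoomed-out view: continents share the hue range ...
print(assign_hues({"Africa": 54, "Europe": 44, "Asia": 48}))
# ... zoomed-in view: the same full range is re-spent on one continent's countries.
print(assign_hues({"France": 1, "Germany": 1, "Spain": 1}))

Zooming in simply re-runs the split on the newly visible children, so the full hue range is always in use at the current scale.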