Expressive 2018


Victoria, British Columbia, Canada | August 17 – 19, 2018
Sketching
3D Sketching for Interactive Model Retrieval in Virtual Reality
Daniele Giunchi, Stuart James, and Anthony Steed
The Role of Grouping in Sketched Diagram Recognition
Amirhossein Ghodrati, Rachel Blagojevic, Hans W. Guesgen, Stephen Marsland, and Beryl Plimmer
Context-based Sketch Classification
Jianhui Zhang, Yilan Chen, Lei Li, Hongbo Fu, and Chiew-Lan Tai
Between 2.5D and 3D
Structuring and Layering Contour Drawings of Organic Shapes
Even Entem, Amal Dev Parakkat, Marie-Paule Cani, and Loïc Barthe
Seamless Reconstruction of Part-Based High-Relief Models from Hand-Drawn Images
Marek Dvorožnák, Saman Sepehri Nejad, Ondřej Jamriška, Alec Jacobson, Ladislav Kavan, and Daniel Sýkora
Sculpture Paintings
Sami Arpa, Sabine Süsstrunk, and Roger D. Hersch
Implicit Representation of Inscribed Volumes
Parto Sahbaei, David Mould, and Brian Wyvill
Stylization Before and Now
Abstract Depiction of Human and Animal Figures: Examples from Two Centuries of Art and Craft
Neil A. Dodgson
MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics
Santiago E. Montesdeoca, Hock Soon Seah, Amir Semmo, Pierre Bénard, Romain Vergne, Joëlle Thollot, and Davide Benvenuti
Motion-coherent stylization with screen-space image filters
Alexandre Bléron, Romain Vergne, Thomas Hurtut, and Joëlle Thollot
Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization
Lonni Besançon, Amir Semmo, David Biau, Bruno Frachet, Virginie Pineau, El Hadi Sariali, Rabah Taouachi, Tobias Isenberg, and Pierre Dragicevic
Virtual Brushes
Brush Stroke Synthesis with a Generative Adversarial Network Driven by Physically Based Simulation
Rundong Wu, Zhili Chen, Zhaowen Wang, Jimei Yang, and Steve Marschner
Fluid Brush
Sarah Abraham, Etienne Vouga, and Donald Fussell
Computational Light Painting and Kinetic Photography
Yaozhun Huang, Sze-Chun Tsang, Hei-Ting Tamar Wong, and Miu-Ling Lam
Cartoons and Beyond
2D Shading for Cel Animation
Matis Hudon, Rafael Pagés, Mairéad Grogan, Jan Ondřej, and Aljoša Smolić
ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters
Xinyi Fan, Amit H. Bermano, Vladimir G. Kim, Jovan Popović, and Szymon Rusinkiewicz
Automatic Generation of Geological Stories from a Single Sketch
Maxime Garcia, Marie-Paule Cani, Rémi Ronfard, Claude Gout, and Christian Perrenoud
Posters
An ego-altruist society
Pedro M. Cruz and André B. Cunha
Approaches for Local Artistic Control of Mobile Neural Style Transfer
Max Reimann, Mandy Klingbeil, Sebastian Pasewaldt, Amir Semmo, Jürgen Döllner, and Matthias Trapp
Stylized Stereoscopic 3D Line Drawings from 3D Images
Lesley Istead and Craig S. Kaplan

BibTeX (Expressive 2018)
@inproceedings{-,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Expressive 2018: frontmatter}},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {-}
}
@inproceedings{10.1145:3229147.3229166,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{3D Sketching for Interactive Model Retrieval in Virtual Reality}},
  author = {Giunchi, Daniele and James, Stuart and Steed, Anthony},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229166}
}
@inproceedings{10.1145:3229147.3229160,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{The Role of Grouping in Sketched Diagram Recognition}},
  author = {Ghodrati, Amirhossein and Blagojevic, Rachel and Guesgen, Hans W. and Marsland, Stephen and Plimmer, Beryl},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229160}
}
@inproceedings{10.1145:3229147.3229154,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Context-based Sketch Classification}},
  author = {Zhang, Jianhui and Chen, Yilan and Li, Lei and Fu, Hongbo and Tai, Chiew-Lan},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229154}
}
@inproceedings{10.1145:3229147.3229155,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Structuring and Layering Contour Drawings of Organic Shapes}},
  author = {Entem, Even and Parakkat, Amal Dev and Cani, Marie-Paule and Barthe, Loïc},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229155}
}
@inproceedings{10.1145:3229147.3229153,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Seamless Reconstruction of Part-Based High-Relief Models from Hand-Drawn Images}},
  author = {Dvorožnák, Marek and Nejad, Saman Sepehri and Jamriška, Ondřej and Jacobson, Alec and Kavan, Ladislav and Sýkora, Daniel},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229153}
}
@inproceedings{10.1145:3229147.3229156,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Sculpture Paintings}},
  author = {Arpa, Sami and Süsstrunk, Sabine and Hersch, Roger D.},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229156}
}
@inproceedings{10.1145:3229147.3229164,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Implicit Representation of Inscribed Volumes}},
  author = {Sahbaei, Parto and Mould, David and Wyvill, Brian},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229164}
}
@inproceedings{10.1145:3229147.3229152,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Abstract Depiction of Human and Animal Figures: Examples from Two Centuries of Art and Craft}},
  author = {Dodgson, Neil A.},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229152}
}
@inproceedings{10.1145:3229147.3229162,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics}},
  author = {Montesdeoca, Santiago E. and Seah, Hock Soon and Semmo, Amir and Bénard, Pierre and Vergne, Romain and Thollot, Joëlle and Benvenuti, Davide},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229162}
}
@inproceedings{10.1145:3229147.3229163,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Motion-coherent stylization with screen-space image filters}},
  author = {Bléron, Alexandre and Vergne, Romain and Hurtut, Thomas and Thollot, Joëlle},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229163}
}
@inproceedings{10.1145:3229147.3229158,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization}},
  author = {Besançon, Lonni and Semmo, Amir and Biau, David and Frachet, Bruno and Pineau, Virginie and Sariali, El Hadi and Taouachi, Rabah and Isenberg, Tobias and Dragicevic, Pierre},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229158}
}
@inproceedings{10.1145:3229147.3229150,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Brush Stroke Synthesis with a Generative Adversarial Network Driven by Physically Based Simulation}},
  author = {Wu, Rundong and Chen, Zhili and Wang, Zhaowen and Yang, Jimei and Marschner, Steve},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229150}
}
@inproceedings{10.1145:3229147.3229165,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Fluid Brush}},
  author = {Abraham, Sarah and Vouga, Etienne and Fussell, Donald},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229165}
}
@inproceedings{10.1145:3229147.3229167,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Computational Light Painting and Kinetic Photography}},
  author = {Huang, Yaozhun and Tsang, Sze-Chun and Wong, Hei-Ting Tamar and Lam, Miu-Ling},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229167}
}
@inproceedings{10.1145:3229147.3229148,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{2D Shading for Cel Animation}},
  author = {Hudon, Matis and Pagés, Rafael and Grogan, Mairéad and Ondřej, Jan and Smolić, Aljoša},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229148}
}
@inproceedings{10.1145:3229147.3229149,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters}},
  author = {Fan, Xinyi and Bermano, Amit H. and Kim, Vladimir G. and Popović, Jovan and Rusinkiewicz, Szymon},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229149}
}
@inproceedings{10.1145:3229147.3229161,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Automatic Generation of Geological Stories from a Single Sketch}},
  author = {Garcia, Maxime and Cani, Marie-Paule and Ronfard, Rémi and Gout, Claude and Perrenoud, Christian},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229161}
}
@inproceedings{10.1145:3229147.3229191,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{An ego-altruist society}},
  author = {Cruz, Pedro M. and Cunha, André B.},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229191}
}
@inproceedings{10.1145:3229147.3229188,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Approaches for Local Artistic Control of Mobile Neural Style Transfer}},
  author = {Reimann, Max and Klingbeil, Mandy and Pasewaldt, Sebastian and Semmo, Amir and Döllner, Jürgen and Trapp, Matthias},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229188}
}
@inproceedings{10.1145:3229147.3229189,
  booktitle = {Expressive: Computational Aesthetics, Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Rendering},
  editor = {Aydın, Tunç and Sýkora, Daniel},
  title = {{Stylized Stereoscopic 3D Line Drawings from 3D Images}},
  author = {Istead, Lesley and Kaplan, Craig S.},
  year = {2018},
  publisher = {ACM},
  ISSN = {2079-8679},
  ISBN = {978-1-4503-5892-7},
  DOI = {10.1145/3229147.3229189}
}

Recent Submissions

  • Expressive 2018: frontmatter
    (ACM, 2018) Aydın, Tunç and Sýkora, Daniel
  • 3D Sketching for Interactive Model Retrieval in Virtual Reality
    (ACM, 2018) Giunchi, Daniele; James, Stuart; Steed, Anthony
    We describe a novel method for searching 3D model collections using free-form sketches within a virtual environment as queries. As opposed to traditional sketch retrieval, our queries are drawn directly onto an example model. Using immersive virtual reality, the user can express their query through a sketch that demonstrates the desired structure, color and texture. Unlike previous sketch-based retrieval methods, users remain immersed within the environment without relying on textual queries or 2D projections, which can disconnect the user from the environment. We perform a test using queries over several descriptors, evaluating the precision in order to select the most accurate one. We show how a convolutional neural network (CNN) can create multi-view representations of colored 3D sketches. Using such a descriptor representation, our system is able to rapidly retrieve models; in this way, we provide the user with an interactive method of navigating large object datasets. Through a user study we demonstrate that by using our VR 3D model retrieval system, users can perform searches more quickly and intuitively than with a naive linear browsing method. Using our system, users can rapidly populate a virtual environment with specific models from a very large database, and thus the technique has the potential to be broadly applicable in immersive editing systems.
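    A minimal sketch of the retrieval step only (our illustration, not the authors' code): several rendered views of the query are encoded and concatenated into one descriptor, and database models are ranked by cosine similarity. Here `encode_view` is a stand-in for the CNN feature extractor.

```python
import numpy as np

def encode_view(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a CNN feature extractor: a fixed random projection."""
    proj = rng.standard_normal((128, image.size))
    return proj @ image.ravel()

def descriptor(views: list, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)        # fixed seed -> same "network" every call
    d = np.concatenate([encode_view(v, rng) for v in views])
    return d / np.linalg.norm(d)             # cosine-normalized multi-view descriptor

def retrieve(query_views, database, k=5):
    """Return indices of the k database models most similar to the query."""
    q = descriptor(query_views)
    sims = np.array([float(q @ descriptor(m)) for m in database])
    return np.argsort(-sims)[:k]

# Toy usage: 4 views of 32x32 renders per model, 100 database models.
rng = np.random.default_rng(1)
db = [[rng.random((32, 32)) for _ in range(4)] for _ in range(100)]
print(retrieve(db[42], db))                  # index 42 ranks first
```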
  • The Role of Grouping in Sketched Diagram Recognition
    (ACM, 2018) Ghodrati, Amirhossein; Blagojevic, Rachel; Guesgen, Hans W.; Marsland, Stephen; Plimmer, Beryl
    An early phase of sketched diagram recognition systems consists of grouping digital ink into possible shapes. This survey presents the key literature on automatic grouping techniques in sketch recognition. In addition, we identify the major challenges in grouping ink into identifiable shapes, discuss the common solutions to these challenges based on current research, and highlight areas for future work.
  • Context-based Sketch Classification
    (ACM, 2018) Zhang, Jianhui; Chen, Yilan; Li, Lei; Fu, Hongbo; Tai, Chiew-Lan
    We present a novel context-based sketch classification framework using relations extracted from scene images. Most existing methods perform sketch classification by considering individually sketched objects and often fail to identify their correct categories, due to the highly abstract nature of sketches. For a sketched scene containing multiple objects, we propose to classify a sketched object by considering its surrounding context in the scene, which provides vital cues for resolving its recognition ambiguity. We learn such context knowledge from a database of scene images by summarizing the inter-object relations therein, such as co-occurrence, relative positions and sizes. We show that the context information can be used for both incremental sketch classification and sketch co-classification. Our method outperforms the state-of-the-art single-object classification method, evaluated on a new dataset of sketched scenes.
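    A toy version of the context idea (not the paper's model; the label set and co-occurrence table below are invented for illustration): per-object classifier scores are combined with co-occurrence statistics of neighbors that are already labeled.

```python
import numpy as np

CATEGORIES = ["chair", "table", "lamp"]
# COOCCUR[i, j] ~ how often category i appears near category j in scene images.
COOCCUR = np.array([[1.0, 0.9, 0.3],
                    [0.9, 1.0, 0.6],
                    [0.3, 0.6, 1.0]])

def contextual_scores(unary_probs, neighbor_labels, w=1.0):
    """Combine per-object classifier probabilities with scene context."""
    scores = np.log(unary_probs)
    for j in neighbor_labels:                 # each already-labeled neighbor
        scores += w * np.log(COOCCUR[:, j])   # reward co-occurring categories
    return scores

# Alone, the classifier slightly favors "lamp"; next to a table, "chair" wins.
unary = np.array([0.34, 0.30, 0.36])
print(CATEGORIES[int(np.argmax(np.log(unary)))])                  # lamp
print(CATEGORIES[int(np.argmax(contextual_scores(unary, [1])))])  # chair
```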
  • Structuring and Layering Contour Drawings of Organic Shapes
    (ACM, 2018) Entem, Even; Parakkat, Amal Dev; Cani, Marie-Paule; Barthe, Loïc
    Complex vector drawings serve as convenient and expressive visual representations, but they remain difficult to edit or manipulate. For clean-line vector drawings of smooth organic shapes, we describe a method to automatically extract a layered structure for the drawn object, valid from the current or nearby viewpoints and organized as parts with relative depth orderings. The layers correspond to salient regions of the drawing, which are often naturally associated with 'parts' of the underlying shape. Our method handles drawings that contain complex internal contours with T-junctions indicative of occlusions, as well as internal curves that may either be expressive strokes or substructures. To extract the structure, we introduce a new part-aware metric for complex 2D drawings, the radial variation metric, which is used to identify salient sub-parts. These sub-parts are then considered in a priority-ordered fashion, which enables us to identify and recursively process new shape parts while keeping track of their relative depth ordering. The output is represented in terms of scalable vector graphics layers, thereby enabling meaningful editing and manipulation. We evaluate the method on multiple input drawings and show that the structure we compute is convenient for subsequent posing and animation from nearby viewpoints.
  • Seamless Reconstruction of Part-Based High-Relief Models from Hand-Drawn Images
    (ACM, 2018) Dvorožnák, Marek; Nejad, Saman Sepehri; Jamriška, Ondřej; Jacobson, Alec; Kavan, Ladislav; Sýkora, Daniel
    We present a new approach to the reconstruction of high-relief models from hand-made drawings. Our method is tailored to an interactive modeling scenario where the input drawing can be separated into a set of semantically meaningful parts whose relative depth order is known beforehand. For this kind of input, our technique inflates individual components to have a semi-elliptical profile, positions them to satisfy the prescribed depth order, and interconnects them seamlessly. Compared to previous similar frameworks, our approach is the first to formulate this reconstruction process as a joint non-linear optimization problem. Although its direct optimization is computationally demanding, we propose an approximate solution that delivers comparable results orders of magnitude faster, enabling an interactive response. We evaluate our approach on various hand-made drawings and demonstrate that it provides state-of-the-art quality in comparison with previous methods that require comparable user intervention.
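    A toy version of the inflation step alone (our simplification; the paper's joint non-linear optimization, depth ordering and seamless interconnection are not reproduced): map a part's distance transform through a circular-arc falloff to obtain a rounded, semi-elliptical cross-section.

```python
import numpy as np
from scipy import ndimage

def inflate(mask: np.ndarray, r: float = 10.0) -> np.ndarray:
    """mask: boolean part region. Returns a height field that is 0 at the
    boundary and approaches r in the interior, with a rounded profile."""
    d = np.minimum(ndimage.distance_transform_edt(mask), r)
    return np.sqrt(r ** 2 - (r - d) ** 2)

mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 8:56] = True                      # one rasterized drawn part
height = inflate(mask)
print(round(float(height.max()), 2))          # ~10.0 in the interior
```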
  • Sculpture Paintings
    (ACM, 2018) Arpa, Sami; Süsstrunk, Sabine; Hersch, Roger D.
    We present a framework for automatically creating a type of artwork in which 2D and 3D contents are mixed within the same composition. These artworks create plausible effects for the viewers by showing a different relationship between 2D and 3D at each viewing angle. As the viewing angle is changed, we can clearly see 3D elements emerging from the scene. When creating such artwork, we face several challenges. The main challenge is to ensure the continuity between the 2D and the 3D parts in terms of geometry and colors. We provide a 3D synthetic environment in which the user selects the region of interest (ROI) from a given scene to be shown in 3D. Then we create a flat rendering grid that matches the topology of the ROI and attach the ROI to the rendering grid. Next we create textures for the flat part and the ROI. To enhance the continuity between the 2D and the 3D scene elements, we include bas-relief profiles around the ROI. Our framework can be used as a tool in order to assist artists in designing such sculpture paintings. Furthermore, it can be applied by amateur users to create decorative objects for exhibitions, souvenirs, and homes.
  • Implicit Representation of Inscribed Volumes
    (ACM, 2018) Sahbaei, Parto; Mould, David; Wyvill, Brian
    We present an implicit approach for constructing smooth isolated or interconnected 3D inscribed volumes, which can be employed for volumetric modeling of various kinds of spongy or porous structures, such as volcanic rocks, pumice stones, cancellous bones, liquid or dry foam, radiolarians, cheese, and other similar materials. The inscribed volumes can be represented in their normal or positive forms to model natural pebbles or pearls, or in their inverted or negative forms to be used in porous structures; regardless of their type, their smoothness and sizes are controlled by the user without losing the consistency of the shapes. We introduce two techniques for blending and creating interconnections between these inscribed volumes, giving our approach great flexibility to adapt to different types of porous structures, whether regular or irregular. We begin with a set of convex polytopes, such as 3D Voronoi diagram cells, and compute inscribed volumes bounded by the cells. The cells can be irregular in shape, scale, and topology, and this irregularity transfers to the inscribed volumes, producing natural-looking spongy structures. Describing the inscribed volumes with implicit functions gives us the freedom to exploit volumetric surface combination and deformation operations effortlessly.
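    A minimal sketch of the idea in one cell (our formulation for illustration, not the paper's): the implicit field is a smooth minimum of signed distances to the cell's face planes, so its positive region is a rounded volume inscribed in the cell; negating the field yields the complementary pore.

```python
import numpy as np

def smooth_min(values, k=8.0):
    """Smooth approximation of min() via log-sum-exp; larger k = sharper."""
    v = np.asarray(values)
    return -np.log(np.exp(-k * v).sum(axis=0)) / k

def inscribed_field(p, faces, iso=0.1):
    """faces: list of (unit normal, offset) with the inside where n.p + d > 0.
    Returns > 0 inside the smooth inscribed volume, < 0 outside."""
    return smooth_min([n @ p + d for n, d in faces]) - iso

# A unit cube cell centered at the origin (inward-facing plane normals):
cube = [(np.array(n, float), 0.5)
        for n in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]]
print(inscribed_field(np.zeros(3), cube) > 0)                    # True: center inside
print(inscribed_field(np.array([0.49, 0.49, 0.49]), cube) > 0)   # False: corner rounded off
```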
  • Abstract Depiction of Human and Animal Figures: Examples from Two Centuries of Art and Craft
    (ACM, 2018) Dodgson, Neil A.
    The human figure is important in art. I discuss examples of the abstract depiction of the human figure and the challenge faced in attempting to mimic algorithmically what human artists can achieve. The challenge lies in the workings of the human brain: we have enormous knowledge about the world and a particular ability to make fine distinctions about other humans from posture, clothing and expression. This allows a human to make assumptions about human figures from a tiny amount of data, and allows a human artist to take advantage of this when creating art. We look at examples from impressionist and post-impressionist painting, from cross-stitch and knitting, from pixelated renderings in early video games, and from the stylisation used by the artists of children's books.
  • MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics
    (ACM, 2018) Montesdeoca, Santiago E.; Seah, Hock Soon; Semmo, Amir; Bénard, Pierre; Vergne, Romain; Thollot, Joëlle; Benvenuti, Davide
    We propose a framework for expressive non-photorealistic rendering of 3D computer graphics: MNPR. Our work focuses on enabling stylization pipelines with a wide range of control, thereby covering the interaction spectrum with real-time feedback. In addition, we introduce control semantics that allow cross-stylistic art direction, which is demonstrated through our implemented watercolor, oil and charcoal stylizations. Our generalized control semantics and their style-specific mappings are designed to be extrapolated to other styles, by adhering to the same control scheme. We then share our implementation details by breaking down our framework and elaborating on its inner workings. Finally, we evaluate the usefulness of each level of control through a user study involving 20 experienced artists and engineers in the industry, who have collectively spent over 245 hours using our system. MNPR is implemented in Autodesk Maya and open-sourced through this publication, to facilitate adoption by artists and further development by the expressive research and development community.
  • Motion-coherent stylization with screen-space image filters
    (ACM, 2018) Bléron, Alexandre; Vergne, Romain; Hurtut, Thomas; Thollot, Joëlle
    One of the qualities sought in expressive rendering is the 2D impression of the resulting style, called flatness. In the context of 3D scenes, screen-space stylization techniques are good candidates for flatness as they operate in the 2D image plane, after the scene has been rendered into so-called G-buffers. Various stylization filters can be applied in screen-space while making use of the geometrical information contained in G-buffers to ensure motion coherence. However, this means that filtering can only be done inside the rasterized surface of the object. This can be detrimental to some styles that require irregular silhouettes to be convincing. In this paper, we describe a post-processing pipeline that allows stylization filters to extend outside the rasterized footprint of the object by locally "inflating" the data contained in G-buffers. This pipeline is fully implemented on the GPU and can be evaluated at interactive rates. We show how common image filtering techniques, when integrated in our pipeline and in combination with G-buffer data, can be used to reproduce a wide range of "digitally-painted" appearances, such as directed brush strokes with irregular silhouettes, while keeping a degree of motion coherence.
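    The "inflation" of G-buffer data can be illustrated offline (the paper's pipeline runs on the GPU; this numpy toy shows the idea only): pixels outside the rasterized footprint borrow the attribute of the nearest covered pixel, and the footprint is grown so that filters may produce irregular silhouettes.

```python
import numpy as np
from scipy import ndimage

def inflate_gbuffer(attr: np.ndarray, coverage: np.ndarray, radius: int = 4):
    """Extrapolate a G-buffer attribute beyond the rasterized footprint.

    attr: per-pixel attribute (e.g., an ID or shading buffer);
    coverage: boolean footprint. Every uncovered pixel receives the value of
    its nearest covered pixel; the footprint is dilated by `radius` pixels."""
    _, (iy, ix) = ndimage.distance_transform_edt(~coverage, return_indices=True)
    inflated = attr[iy, ix]                       # nearest-pixel extrapolation
    grown = ndimage.binary_dilation(coverage, iterations=radius)
    return inflated, grown                        # filter inside `grown` only

coverage = np.zeros((32, 32), dtype=bool); coverage[8:24, 8:24] = True
attr = np.where(coverage, 1.0, 0.0)
inflated, grown = inflate_gbuffer(attr, coverage)
print(inflated[0, 0], grown[5, 8])                # 1.0 True: data now exists outside
```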
  • Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization
    (ACM, 2018) Besançon, Lonni; Semmo, Amir; Biau, David; Frachet, Bruno; Pineau, Virginie; Sariali, El Hadi; Taouachi, Rabah; Isenberg, Tobias; Dragicevic, Pierre
    We present the first empirical study on using color manipulation and stylization to make surgery images more palatable. While aversion to such images is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques, and tested them both on surgeons and lay people. While many artistic methods were found unusable by surgeons, edge-preserving image smoothing gave good results both in terms of preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). Color manipulation turned out to be not as effective.
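    For concreteness, here is one common edge-preserving smoother, OpenCV's bilateral filter; the study's exact filter choice and parameter settings are not claimed here, and the file path is a placeholder.

```python
import cv2

img = cv2.imread("surgical_photo.jpg")            # hypothetical input path
assert img is not None, "placeholder path; point this at a real image"
# Arguments: 9-pixel neighborhood diameter; sigmaColor=75 limits mixing across
# strong color edges; sigmaSpace=75 sets the spatial reach of the smoothing.
smoothed = cv2.bilateralFilter(img, 9, 75, 75)
cv2.imwrite("surgical_photo_smoothed.jpg", smoothed)
```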
  • Brush Stroke Synthesis with a Generative Adversarial Network Driven by Physically Based Simulation
    (ACM, 2018) Wu, Rundong; Chen, Zhili; Wang, Zhaowen; Yang, Jimei; Marschner, Steve
    We introduce a novel approach that uses a generative adversarial network (GAN) to synthesize realistic oil painting brush strokes, where the network is trained with data generated by a high-fidelity simulator. Among approaches to digitally synthesizing natural media painting strokes, methods using physically based simulation by far produce the most realistic visual results and allow the most intuitive control of stroke variations. However, accurate physics simulations are known to be computationally expensive and often cannot meet the performance requirements of painting applications. A few existing simulation-based methods have managed to reach real-time performance at the cost of lower visual quality resulting from simplified models or lower resolution. In our work, we propose to replace the expensive fluid simulation with a neural network generator. The network takes the existing canvas and new brush trajectory information as input and produces the height and color of the paint surface as output. We build a large painting sample training dataset by feeding random strokes from artists' recordings into a high quality offline simulator. The network is able to produce visual quality comparable to the offline simulator with better performance than the existing real-time oil painting simulator. Finally, we implement a real-time painting system using the trained network with stroke splitting and patch blending and show artworks created with the system by artists. Our neural network approach opens up new opportunities for real-time applications of sophisticated and expensive physically based simulation.
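    A shape-level sketch of such a generator (an assumed input/output layout, not the paper's architecture; in the paper the network is trained adversarially on simulator output, whereas this stub is untrained):

```python
import torch
import torch.nn as nn

class StrokeGenerator(nn.Module):
    """Maps the current canvas plus a rasterized brush-trajectory map to new
    paint color and height over the stroke footprint."""
    def __init__(self):
        super().__init__()
        # in: canvas RGB (3) + canvas height (1) + trajectory map (1) = 5 channels
        # out: new RGB (3) + new height (1) = 4 channels
        self.net = nn.Sequential(
            nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 4, 3, padding=1),
        )

    def forward(self, canvas_rgbh, trajectory):
        return self.net(torch.cat([canvas_rgbh, trajectory], dim=1))

g = StrokeGenerator()
out = g(torch.rand(1, 4, 128, 128), torch.rand(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 4, 128, 128]) -> RGB + height
```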
  • Fluid Brush
    (ACM, 2018) Abraham, Sarah; Vouga, Etienne; Fussell, Donald
    Digital media allows artists to create a wealth of visually interesting effects that are impossible in traditional media. This includes temporal effects, such as cinemagraph animations, and expressive fluid effects. Yet these flexible and novel media often require highly technical expertise, which is outside a traditional artist's skill with paintbrush or pen. Fluid Brush acts as a form of novel digital media, which retains the brush-based interactions of traditional media while expressing the movement of turbulent and laminar flow. As a digital medium controlled through a non-technical interface, Fluid Brush allows for a novel form of painting that makes fluid effects accessible to novice users and traditional artists. To provide an informal demonstration of the medium's effects, applications, and accessibility, we asked designers, traditional artists, and digital artists to experiment with Fluid Brush. They produced a variety of works reflective of their artistic interests and backgrounds.
  • Computational Light Painting and Kinetic Photography
    (ACM, 2018) Huang, Yaozhun; Tsang, Sze-Chun; Wong, Hei-Ting Tamar; Lam, Miu-Ling
    We present a computational framework for creating swept-volume light paintings and kinetic photography. Unlike conventional light-painting techniques that use a hand-held point light source or LED array, we move a flat-panel display with a robot along a curved path. The display shows real-time rendered contours of a 3D object being sliced by the display plane along the path. All light contours are captured in a long exposure and constitute the virtual 3D object augmented in the real space. To ensure geometric accuracy, we use a hand-eye calibration method to precisely obtain the transformation between the display and the robot. A path generation algorithm is developed to automatically yield the robot path that best accommodates the 3D shape of the target model. To further avoid shape distortion due to desynchronization between the display's pose and the image content, we propose a real-time slicing method for arbitrary slicing directions. By organizing the triangular mesh into an octree data structure, the approach can significantly reduce the computational time and improve the performance of real-time rendering. We study the optimal tree level for different ranges of triangle counts so as to attain competitive computational time. Texture mapping is also implemented to produce colored light paintings. We extend our methodology to computational kinetic photography, which is dual to light painting. Instead of keeping the camera stationary, we move the camera with the robot and capture long exposures of a stationary display showing light contours. We transform the display path for light painting into the camera path for kinetic photography. A variety of 3D models are used to verify that the proposed techniques can produce stunning long exposures with high-fidelity volumetric imagery. The techniques have great potential for innovative applications including animation, visible light communication, invisible information visualization and creative art.
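    The geometric core, computing the contour where the display plane slices the mesh, can be sketched with an off-the-shelf library (our illustration using trimesh; the authors use their own octree-accelerated real-time slicer):

```python
import trimesh

mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)  # stand-in model
# Slice with a plane through the origin, normal along +x (one display pose).
section = mesh.section(plane_origin=[0, 0, 0], plane_normal=[1, 0, 0])
if section is not None:
    planar, _ = section.to_planar()       # 2D contour to show on the display
    print(len(planar.entities), "contour segments")
```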
  • 2D Shading for Cel Animation
    (ACM, 2018) Hudon, Matis; Pagés, Rafael; Grogan, Mairéad; Ondřej, Jan; Smolić, Aljoša
    We present a semi-automatic method for creating shades and self-shadows in cel animation. Besides producing attractive images, shades and shadows provide important visual cues about the depth, shapes, movement and lighting of the scene. In conventional cel animation, shades and shadows are drawn by hand. As opposed to previous approaches, this method does not rely on a complex 3D reconstruction of the scene: its key advantages are simplicity and ease of use. The tool was designed to stay as close as possible to the natural 2D creative environment and therefore provides an intuitive and user-friendly interface. Our system creates shading based on hand-drawn objects or characters, given very limited guidance from the user. The method employs simple yet very efficient algorithms to create shading directly out of drawn strokes. We evaluate our system through a subjective user study and provide a qualitative comparison of our method with existing professional tools and the state of the art.
  • ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters
    (ACM, 2018) Fan, Xinyi; Bermano, Amit H.; Kim, Vladimir G.; Popović, Jovan; Rusinkiewicz, Szymon
    Characters in traditional artwork such as children's books or cartoon animations are typically drawn once, in fixed poses, with little opportunity to change the characters' appearance or re-use them in a different animation. To enable these applications one can fit a consistent parametric deformable model - a puppet - to different images of a character, thus establishing consistent segmentation, dense semantic correspondence, and deformation parameters across poses. In this work we argue that a layered deformable puppet is a natural representation for hand-drawn characters, providing an effective way to deal with the articulation, expressive deformation, and occlusion that are common to this style of artwork. Our main contribution is an automatic pipeline for fitting these models to unlabeled images depicting the same character in various poses. We demonstrate that the output of our pipeline can be used directly for editing and re-targeting animations.
  • Automatic Generation of Geological Stories from a Single Sketch
    (ACM, 2018) Garcia, Maxime; Cani, Marie-Paule; Ronfard, Rémi; Gout, Claude; Perrenoud, Christian
    Describing the history of a terrain from a vertical geological cross-section is an important problem in geology, called geological restoration. Designing the sequential evolution of the geometry is usually done manually, involving much trial and error. In this work, we recast this problem as a storyboarding problem, where the different stages in the restoration are automatically generated as storyboard panels and displayed as geological stories. Our system allows geologists to interactively explore multiple scenarios by selecting plausible geological event sequences and backward-simulating them at interactive rates, causing the terrain layers to be progressively un-deposited, un-eroded, un-compacted, un-folded and un-faulted. Storyboard sketches are generated along the way. When a restoration is complete, the storyboard panels can be used to automatically generate a forward animation of the terrain history, enabling quick visualization and validation of hypotheses. As a proof of concept, we describe how our system was used by geologists to restore and animate cross-sections in real examples at various spatial and temporal scales and with different levels of complexity, including the Chartreuse region in the French Alps.
  • An ego-altruist society
    (ACM, 2018) Cruz, Pedro M.; Cunha, André B.
    This artwork is an artificial life simulation that shows how a society of agents flourishes with the symbiotic interactions between the egotist and altruist extremes. Egotist agents seek and absorb energy. Altruist agents seek other agents, share energy and reproduce. They group into multi-agent organisms that adapt to the energy present in the system.
  • Approaches for Local Artistic Control of Mobile Neural Style Transfer
    (ACM, 2018) Reimann, Max; Klingbeil, Mandy; Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias
    This work presents enhancements to state-of-the-art adaptive neural style transfer techniques, providing a generalized user interface with creativity tool support for lower-level local control to facilitate demanding interactive editing on mobile devices. The approaches are implemented in a mobile app designed to orchestrate three neural style transfer techniques, using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors to perform location-based filtering and direct the composition. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, and point out directions for future research.
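    At its simplest, location-based filtering reduces to mask-guided blending (a minimal sketch; `stylize` stands in for any neural style transfer backend and is a trivial placeholder here):

```python
import numpy as np

def stylize(image: np.ndarray) -> np.ndarray:
    """Placeholder for a style-transfer network; here it just inverts colors."""
    return 1.0 - image

def local_style(image, mask):
    """mask in [0, 1], painted on-screen: 1 = fully stylized, 0 = original."""
    styled = stylize(image)
    return mask[..., None] * styled + (1.0 - mask[..., None]) * image

img = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1.0   # user-painted region
out = local_style(img, mask)
print(out.shape)                                       # (64, 64, 3)
```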
  • Stylized Stereoscopic 3D Line Drawings from 3D Images
    (ACM, 2018) Istead, Lesley; Kaplan, Craig S.
    Stereoscopic 3D (S3D) line drawings were introduced by Sir Charles Wheatstone in 1838. S3D line drawings persist today in various art forms, such as comic books. Stereoscopic 3D line drawings may be hand-drawn or generated from 3D meshes using a variety of algorithms. When creating these drawings, emphasis is placed on consistency: ensuring that the object/scene visible in both views matches exactly for a comfortable viewing experience and an accurate depiction of depth [Northam et al. 2013]. While producing S3D line drawings from S3D photos has not been studied in depth, several methods do exist. Kim et al. describe a method for producing stylized stereoscopic 3D line drawings from S3D photographs [Kim et al. 2012]. Their paper applies Canny edge detection to the edge tangent field [Kang et al. 2007] of the left stereo image and warps the discovered edges to the right image using the disparity map. However, the rendered lines come from all edges that can be found in the actual image, including object contours as well as texture or lighting contours. By contrast, a hand-drawn stereoscopic 3D line drawing would be likely to include only object contours and creases. In previous work, we explored the stylization of S3D images by decomposing an image into a set of disparity layers [Northam et al. 2013]. However, that would be ineffective here because while applying the Canny edge detector to the disparity map would isolate object contours from texture or lighting contours, the layers would only contain pixels of a single disparity. Hence, there would be no edges to find in each layer. We present a method to produce stylized stereoscopic 3D line drawings from S3D photos that depicts only object contours, similar to traditional line drawings. Since contours alone can be insufficient to communicate 3D shape, we also provide the option of adding shading to our drawings to clarify shape and enhance the perception of depth.
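    The disparity-based edge warp described above has a compact form (our toy version, ignoring occlusion handling): an edge pixel at (y, x) in the left view lands at (y, x - d) in the right view, where d is the disparity at that pixel.

```python
import numpy as np

def warp_edges_left_to_right(edges: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """edges: boolean edge map of the left view; disparity: per-pixel int shift."""
    h, w = edges.shape
    right = np.zeros_like(edges)
    ys, xs = np.nonzero(edges)
    xr = xs - disparity[ys, xs]                 # shift along scan lines
    keep = (xr >= 0) & (xr < w)                 # drop pixels shifted off-frame
    right[ys[keep], xr[keep]] = True
    return right

edges = np.zeros((4, 8), dtype=bool); edges[:, 5] = True   # a vertical contour
disp = np.full((4, 8), 2, dtype=int)                       # uniform disparity
print(warp_edges_left_to_right(edges, disp).astype(int))   # column 3 is set
```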