EG 2018 - Posters

Posters
A Smart Palette for Helping Novice Painters to Mix Physical Watercolor Pigments
Mei-Yun Chen, Ci-Syuan Yang, and Ming Ouhyoung
A Probabilistic Motion Planning Algorithm for Realistic Walk Path Simulation
Philipp Agethen, Thomas Neher, Felix Gaisbauer, Martin Manns, and Enrico Rukzio
Presenting a Deep Motion Blending Approach for Simulating Natural Reach Motions
Felix Gaisbauer, Philipp Froehlich, Jannes Lehwald, Philipp Agethen, and Enrico Rukzio
Introducing a Modular Concept for Exchanging Character Animation Approaches
Felix Gaisbauer, Philipp Agethen, Thomas Bär, and Enrico Rukzio
Towards Self-Perception in Augmented Virtuality: Hand Segmentation with Fully Convolutional Networks
Ester Gonzalez-Sosa, Pablo Perez, Redouane Kachach, Jaime Jesus Ruiz, and Alvaro Villegas
Human Sensitivity to Light Zones in Virtual Scenes
Tatiana Kartashova, Huib de Ridder, Susan F. te Pas, and Sylvia C. Pont
Smoothing Noisy Skeleton Data in Real Time
Thomas Hoxey and Ian Stephenson
A Drink in Mars: an Approach to Distributed Reality
Pablo Perez, Ester Gonzalez-Sosa, Redouane Kachach, Jaime Jesus Ruiz, and Alvaro Villegas
RIFNOM: 3D Rotation-Invariant Features on Normal Maps
Akihiro Nakamura, Leo Miyashita, Yoshihiro Watanabe, and Masatoshi Ishikawa
Light Field Synthesis from a Single Image using Improved Wasserstein Generative Adversarial Network
Lingyan Ruan, Bin Chen, and Miu Ling Lam
Growing Circles: A Region Growing Algorithm for Unstructured Grids and Non-aligned Boundaries
Saeed Dabbaghchian
Audio-driven Emotional Speech Animation
Constantinos Charalambous, Zerrin Yumak, and A. Frank van der Stappen
Exemplar Based Filtering of 2.5D Meshes of Faces
Leandro Dihl, Leandro Cruz, and Nuno Gonçalves
A Multifragment Renderer for Material Aging Visualization
Georgios Adamopoulos, Anastasia Moutafidou, Anastasios Drosou, Dimitrios Tzovaras, and Ioannis Fudos
Boundary-aided Human Body Shape and Pose Estimation from a Single Image for Garment Design and Manufacture
Zongyi Xu and Qianni Zhang
Style Translation to Create Child-like Motion
Yuzhu Dong, Aishat Aloba, Lisa Anthony, and Eakta Jain
A Virtual Space with Real IoT Data
Adam Faiers, Thomas Hoxey, and Ian Stephenson
From Spectra to Perceptual Color: Visualization Tools for the Dimensional Reduction Achieved by the Human Color Sense
Joshua S. Harvey, Clive R. Siviour, and Hannah E. Smithson

Recent Submissions

  • Item
    EUROGRAPHICS 2018: Posters Frontmatter
    (The Eurographics Association, 2018) Jain, Eakta; Kosinka, Jirí
  • Item
    Presenting a Deep Motion Blending Approach for Simulating Natural Reach Motions
    (The Eurographics Association, 2018) Gaisbauer, Felix; Froehlich, Philipp; Lehwald, Jannes; Agethen, Philipp; Rukzio, Enrico; Jain, Eakta and Kosinka, Jirí
    Motion blending and character animation systems are widely used in different domains such as gaming or simulation within production industries. Most of the established approaches are based on motion blending techniques. These approaches provide natural motions within common scenarios while inducing low computational costs. However, with an increasing number of influence parameters and constraints such as collision avoidance, they increasingly fail or require a vast amount of time to meet these requirements. With ongoing progress in artificial intelligence and neural networks, recent works present deep-learning-based approaches for motion synthesis, which offer great potential for modeling natural motions while considering heterogeneous influence factors. In this paper, we propose a novel deep blending approach to simulate non-cyclical natural reach motions based on an extension of phase-functioned neural networks.
  • Item
    A Probabilistic Motion Planning Algorithm for Realistic Walk Path Simulation
    (The Eurographics Association, 2018) Agethen, Philipp; Neher, Thomas; Gaisbauer, Felix; Manns, Martin; Rukzio, Enrico; Jain, Eakta and Kosinka, Jirí
    This paper presents an approach that combines a hybrid A* path planner with a statistical motion graph to effectively generate a rich repertoire of walking trajectories. The motion graph is generated from a comprehensive database (20,000 steps) of captured human motion and covers a wide range of gait variants. The hybrid A* path planner can be regarded as an orchestration instance, stitching together succeeding left and right steps drawn from the statistical motion model. Moreover, the hybrid A* planner ensures a collision-free path between a start and an end point. A preliminary evaluation underlines the evident benefits of the proposed algorithm.
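    As a drastically simplified, hypothetical illustration of the stitching idea (greedy sampling rather than the paper's hybrid A* search, with all data structures assumed), steps could be chained like this:

```python
import random

def stitch_walk(motion_graph, start_step, reached_goal, collides, max_steps=200):
    """Greedily chain footsteps drawn from a statistical motion graph.

    motion_graph[step_id] is assumed to hold (next_step_id, probability) pairs;
    collides(step_id) and reached_goal(step_id) stand in for the planner's
    geometric collision test and goal test.
    """
    path = [start_step]
    for _ in range(max_steps):
        candidates = [(s, p) for s, p in motion_graph[path[-1]] if not collides(s)]
        if not candidates:
            break  # dead end; the hybrid A* planner would backtrack/expand instead
        steps, probs = zip(*candidates)
        path.append(random.choices(steps, weights=probs, k=1)[0])
        if reached_goal(path[-1]):
            break
    return path
```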
  • Item
    A Smart Palette for Helping Novice Painters to Mix Physical Watercolor Pigments
    (The Eurographics Association, 2018) Chen, Mei-Yun; Yang, Ci-Syuan; Ouhyoung, Ming; Jain, Eakta and Kosinka, Jirí
    For novice painters, color mixing is a necessary skill that takes many years to learn. To help them acquire this skill quickly, we designed a smart palette system. Our system is based on physical watercolor pigments: we use a spectrometer to measure the transmittance and reflectance of watercolor pigments and collect a color mixing dataset. We then train a color mixing model with a deep neural network (DNN) and use it to predict a large number of color mixtures, from which we build a lookup table for color matching. In the smart palette, users select a target color from an input image; the smart palette then finds the nearest matched color and shows a recipe specifying two pigments and their respective quantities that can be mixed to obtain that color.
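    A minimal sketch of the lookup step described above, under the assumption that the table stores CIELAB colors with two-pigment recipes (the table format and distance metric are illustrative, not the authors' specification):

```python
import numpy as np

# Toy stand-in for the precomputed lookup table: each row is
# (L*, a*, b*, pigment_a, pigment_b, mixing ratio of pigment_a), produced by
# evaluating the trained mixing model over many pigment pairs and ratios.
mixing_table = np.array([
    [52.0, 10.0, -30.0, 1, 4, 0.60],
    [48.0, -5.0,  22.0, 2, 3, 0.35],
    [75.0, 18.0,   8.0, 1, 2, 0.80],
])

def nearest_recipe(target_lab):
    """Return the table row whose mixed color is closest to the target (Euclidean distance in CIELAB)."""
    dists = np.linalg.norm(mixing_table[:, :3] - np.asarray(target_lab, dtype=float), axis=1)
    return mixing_table[np.argmin(dists)]

recipe = nearest_recipe([52.0, 10.0, -32.0])   # color picked from the input image
print("mix pigment", int(recipe[3]), "with pigment", int(recipe[4]), "at ratio", recipe[5])
```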
  • Item
    Introducing a Modular Concept for Exchanging Character Animation Approaches
    (The Eurographics Association, 2018) Gaisbauer, Felix; Agethen, Philipp; Bär, Thomas; Rukzio, Enrico; Jain, Eakta and Kosinka, Jirí
    Nowadays, motion synthesis and character animation systems are used in different domains ranging from gaming to medicine and production industries. In recent years, there has been vast progress in terms of realistic character animation. In this context, motion-capture-based animation systems are frequently used to generate natural motions. Other approaches use physics-based simulation, statistical models or machine learning methods to generate realistic motions. These approaches are, however, tightly coupled with their development environment, thus inducing high porting efforts when being incorporated into different platforms. Currently, no standard exists that allows the exchange of complex character animation approaches. A comprehensive simulation of complex scenarios utilizing these heterogeneous approaches is therefore not yet possible. In a different domain than motion, the Functional Mock-up Interface standard has already solved this problem. Initially tailored to industrial needs, the standard allows the exchange of dynamic simulation approaches such as solvers for mechatronic components. We present a novel concept extending this standard to couple arbitrary character animation approaches using a common interface.
  • Item
    Towards Self-Perception in Augmented Virtuality: Hand Segmentation with Fully Convolutional Networks
    (The Eurographics Association, 2018) Gonzalez-Sosa, Ester; Perez, Pablo; Kachach, Redouane; Ruiz, Jaime Jesus; Villegas, Alvaro; Jain, Eakta and Kosinka, Jirí
    In this work, we propose the use of deep learning techniques to segment items of interest from the local region to increase self-presence in Virtual Reality (VR) scenarios. Our goal is to segment hand images from the perspective of a user wearing a VR headset. We create the VR Hand Dataset, composed of more than 10,000 images, including variations of hand position, scenario, outfit, sleeve and person. We also describe the procedure followed to automatically generate ground-truth images and create synthetic images. Preliminary results look promising.
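    As an illustrative aside (not the authors' architecture; PyTorch is an assumed framework), a fully convolutional per-pixel hand/background predictor can be as small as:

```python
import torch
import torch.nn as nn

class TinyHandFCN(nn.Module):
    """Toy fully convolutional network producing a per-pixel hand logit map."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, 1, kernel_size=1)  # 1 channel: hand vs. background

    def forward(self, x):
        logits = self.classifier(self.features(x))          # quarter resolution
        return nn.functional.interpolate(logits, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

# A 640x480 RGB frame from the headset camera (random stand-in) yields a full-size mask.
mask = torch.sigmoid(TinyHandFCN()(torch.rand(1, 3, 480, 640))) > 0.5
```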
  • Item
    Smoothing Noisy Skeleton Data in Real Time
    (The Eurographics Association, 2018) Hoxey, Thomas; Stephenson, Ian; Jain, Eakta and Kosinka, Jirí
    The aim of this project is to visualise live skeleton tracking data in a virtual analogue of a real-world environment, to be viewed in VR. Using a single RGBD camera for motion tracking is a cost-effective way to obtain real-time 3D skeleton tracking data, and the people being tracked do not need any special markers. This makes it much more practical for use outside a studio or lab environment. However, the skeleton it provides is not as accurate as that of a traditional multiple-camera system. With a single fixed viewpoint, the body can easily occlude itself, for example by standing side-on to the camera. Secondly, without marked tracking points there can be inconsistencies in where the joints are identified, leading to inconsistent body proportions. In this paper we outline a method for improving the quality of motion capture data in real time, providing an off-the-shelf framework for importing the data into a virtual scene. Our method uses a two-stage approach to smooth smaller inconsistencies and to estimate the position of improperly proportioned or occluded joints.
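    The paper's exact two-stage filter is not reproduced here; the sketch below illustrates only the first stage as a simple confidence-gated exponential smoother, with all names and thresholds assumed:

```python
import numpy as np

class JointSmoother:
    """Exponentially smooth a stream of noisy 3D joint positions.

    Low-confidence frames (e.g. occluded joints) are ignored and the last
    good estimate is held instead; alpha trades latency against smoothness.
    """

    def __init__(self, alpha=0.3, min_confidence=0.5):
        self.alpha = alpha
        self.min_confidence = min_confidence
        self.state = None  # last smoothed position per joint

    def update(self, positions, confidences):
        positions = np.asarray(positions, dtype=float)     # shape (num_joints, 3)
        confidences = np.asarray(confidences, dtype=float)
        if self.state is None:
            self.state = positions.copy()
        good = confidences >= self.min_confidence
        self.state[good] = (1 - self.alpha) * self.state[good] + self.alpha * positions[good]
        return self.state
```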
  • Item
    Human Sensitivity to Light Zones in Virtual Scenes
    (The Eurographics Association, 2018) Kartashova, Tatiana; Ridder, Huib de; Pas, Susan F. te; Pont, Sylvia C.; Jain, Eakta and Kosinka, Jirí
    We investigated the perception of light properties in scenes containing volumes with dramatically different light properties (direction, intensity, diffuseness). Each scene had two light zones, defined as distinct spatial groupings of lighting variables significant to the space- and form-giving characteristics of light [Mad07]. The results show that human observers are more sensitive to differences in illumination between two parts of a scene when the differences occur in the picture plane than when they occur in the depth of the scene. We discuss implications for and possible applications of our results in computer graphics.
  • Item
    A Drink in Mars: an Approach to Distributed Reality
    (The Eurographics Association, 2018) Perez, Pablo; Gonzalez-Sosa, Ester; Kachach, Redouane; Ruiz, Jaime Jesus; Villegas, Alvaro; Jain, Eakta and Kosinka, Jirí
    We have developed the A Drink in Mars application as a proof of concept of Distributed Reality, a particularisation of Mixed Reality which combines a reality transmitted from a remote place (e.g. a live immersive video stream from Mars) with user interaction with the local reality (e.g. drinking their favourite beverage). The application shows acceptable immersion and local interactivity. It runs on Samsung GearVR and needs no special green room for chroma keying, making it suitable for testing different use cases.
  • Item
    Light Field Synthesis from a Single Image using Improved Wasserstein Generative Adversarial Network
    (The Eurographics Association, 2018) Ruan, Lingyan; Chen, Bin; Lam, Miu Ling; Jain, Eakta and Kosinka, Jirí
    We present a deep learning-based method to synthesize a 4D light field from a single 2D RGB image. We consider the light field synthesis problem equivalent to image super-resolution, and solve it using the improved Wasserstein Generative Adversarial Network with gradient penalty (WGAN-GP). Experimental results demonstrate that our algorithm can predict complex occlusions and relative depths in challenging scenes. The light fields synthesized by our method have a much higher signal-to-noise ratio and structural similarity than those of the state-of-the-art approach.
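    For reference, the WGAN-GP critic objective the abstract refers to takes the standard form (notation is generic, not tied to the paper's light-field variables):

$$ L = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big] \;-\; \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big] \;+\; \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\Big], $$

    where $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generated distributions, and $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples.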
  • Item
    RIFNOM: 3D Rotation-Invariant Features on Normal Maps
    (The Eurographics Association, 2018) Nakamura, Akihiro; Miyashita, Leo; Watanabe, Yoshihiro; Ishikawa, Masatoshi; Jain, Eakta and Kosinka, Jirí
    This paper presents 3D rotation-invariant features on normal maps: RIFNOM. We assign a local coordinate system (CS) to each pixel by using neighboring normals to extract the 3D rotation-invariant features. These features can be used to perform interest point matching between normal maps. We can estimate 3D rotations between corresponding interest points by comparing local CSs. Experiments with normal maps of a rigid object showed the performance of the proposed method in estimating 3D rotations. We also applied the proposed method to a non-rigid object. By estimating 3D rotations between corresponding interest points, we successfully detected the deformation of the object.
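    As a rough, hypothetical illustration of deriving a per-pixel local frame from neighboring normals (not the authors' exact construction), one can orthonormalize the mean neighbor normal against the pixel's own normal:

```python
import numpy as np

def local_frame(normal, neighbor_normals):
    """Build a 3x3 local coordinate system (rows = axes) at a normal-map pixel.

    The z-axis is the pixel's normal; the x-axis is the mean neighbor normal
    projected into the tangent plane; the y-axis completes a right-handed frame.
    """
    z = normal / np.linalg.norm(normal)
    mean_n = np.mean(neighbor_normals, axis=0)
    x = mean_n - np.dot(mean_n, z) * z               # remove the component along z
    if np.linalg.norm(x) < 1e-8:                     # degenerate: mean normal parallel to z
        x = np.cross(z, [1.0, 0.0, 0.0])
        if np.linalg.norm(x) < 1e-8:
            x = np.cross(z, [0.0, 1.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.vstack([x, y, z])
```

    With such orthonormal frames, the relative 3D rotation between two matched interest points can be read off as frame_b.T @ frame_a.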
  • Item
    Growing Circles: A Region Growing Algorithm for Unstructured Grids and Non-aligned Boundaries
    (The Eurographics Association, 2018) Dabbaghchian, Saeed; Jain, Eakta and Kosinka, Jirí
    Detecting the boundaries of an enclosed region is a problem which arises in applications such as modeling the human upper airway. Standard algorithms fail because of inevitable errors, i.e. gaps and overlaps between the surrounding boundaries. Growing circles is an automatic approach to address this problem. A circle is centered inside the region and starts to grow by increasing its radius. Its growth is limited either by the surrounding boundaries or by reaching its maximum radius. To deal with complex shapes, many circles are used, each of which partially reconstructs the region; the whole region is determined by the union of these partial regions. The centers of the circles and their maximum radii are calculated adaptively. The method is similar to the region growing algorithm widely used in image processing applications; however, it works for unstructured grids as well as Cartesian ones. As an application, the method is used to detect the boundaries of upper airway cross-sections.
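    A minimal sketch of the growing-circles idea on a sampled boundary (the adaptive choice of centers and maximum radii is simplified away; here the radius is simply the distance to the nearest boundary sample, capped at r_max):

```python
import numpy as np

def grow_circle(center, boundary_points, r_max):
    """Grow a circle from `center` until it touches the boundary or reaches r_max."""
    dists = np.linalg.norm(boundary_points - center, axis=1)
    return min(dists.min(), r_max)

def region_from_circles(centers, boundary_points, r_max):
    """Approximate the enclosed region as the union of the grown circles (disks)."""
    return [(c, grow_circle(c, boundary_points, r_max)) for c in centers]

# Example: a unit-circle boundary with a small gap still yields sensible radii,
# because each circle is also capped by r_max.
theta = np.linspace(0.1, 2 * np.pi - 0.1, 200)          # gap near angle 0
boundary = np.stack([np.cos(theta), np.sin(theta)], axis=1)
disks = region_from_circles([np.array([0.0, 0.0]), np.array([0.4, 0.0])], boundary, r_max=0.8)
```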
  • Item
    Audio-driven Emotional Speech Animation
    (The Eurographics Association, 2018) Charalambous, Constantinos; Yumak, Zerrin; Stappen, A. Frank van der; Jain, Eakta and Kosinka, Jirí
    We propose a procedural audio-driven speech animation method that takes into account emotional variations in speech. Given any audio with its corresponding speech transcript, the method generates speech animation for any 3D character. The expressive speech model matches the pitch and intensity variations in the audio to individual visemes. In addition, we introduce a dynamic co-articulation model that takes into account linguistic rules varying among emotions. We test our approach against two popular speech animation tools and show that our method surpasses them in a perceptual experiment.
  • Item
    Exemplar Based Filtering of 2.5D Meshes of Faces
    (The Eurographics Association, 2018) Dihl, Leandro; Cruz, Leandro; Gonçalves, Nuno; Jain, Eakta and Kosinka, Jirí
    In this work, we present content-aware filtering for 2.5D meshes of faces. We propose an exemplar-based filter that corrects each point of a given mesh through local model-exemplar neighborhood comparison. We take advantage of prior knowledge of the models (faces) to improve the comparison: we first detect facial feature points, create point correctors for the regions of each feature, and use only the corresponding regions when correcting a point of the filtered mesh.
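    An illustrative sketch of the per-point correction (the descriptor, exemplar format and blend weight are assumptions, not the authors' choices):

```python
import numpy as np

def correct_point(patch, exemplar_patches, exemplar_centers, weight=0.5):
    """Correct the center point of a noisy 2.5D patch using its closest exemplar.

    patch: flattened depth neighborhood around the point being filtered.
    exemplar_patches / exemplar_centers: neighborhoods and clean center depths
    taken from the same facial region of the exemplar faces.
    """
    dists = np.linalg.norm(exemplar_patches - patch, axis=1)
    best = np.argmin(dists)
    center = patch[len(patch) // 2]                  # noisy center depth
    return (1 - weight) * center + weight * exemplar_centers[best]
```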
  • Item
    A Multifragment Renderer for Material Aging Visualization
    (The Eurographics Association, 2018) Adamopoulos, Georgios; Moutafidou, Anastasia; Drosou, Anastasios; Tzovaras, Dimitrios; Fudos, Ioannis; Jain, Eakta and Kosinka, Jirí
    People involved in curatorial work and in preservation/conservation tasks need to understand exactly the nature of aging and to prevent it with minimal preservation work. In this scenario, it is of extreme importance to have tools to produce and visualize digital representations and models of visual surface appearance and material properties, to help scientists understand how they evolve over time and under particular environmental conditions. We report on the development of a multifragment renderer for visualizing and combining the results of simulated aging of artwork objects. Several natural aging processes manifest themselves through changes of color, fading, deformations or cracks. Furthermore, changes in the materials underneath the visible layers may be detected or simulated.
  • Item
    Style Translation to Create Child-like Motion
    (The Eurographics Association, 2018) Dong, Yuzhu; Aloba, Aishat; Anthony, Lisa; Jain, Eakta; Jain, Eakta and Kosinka, Jirí
    Animated child characters are increasingly important for educational and entertainment content geared towards younger users. While motion capture technology creates realistic and believable motion for adult characters, the same type of data is hard to collect for young children. We aim to algorithmically transform adult motion capture data to look child-like. We implemented a warping-based style translation algorithm and show the results when this algorithm is applied to adult-to-child motion transformation.
  • Item
    A Virtual Space with Real IoT Data
    (The Eurographics Association, 2018) Faiers, Adam; Hoxey, Thomas; Stephenson, Ian; Jain, Eakta and Kosinka, Jirí
    Large quantities of live data about an environment can be easily and cheaply collected using a network of small sensors (IoT). However, these sensors typically do not display any information directly, and it can be difficult to understand the data collected. Conversely, VR environments used for training require scenarios to be created and populated with rich data. By linking the VR system directly to the IoT data broker, we import the live (or recorded) status of real hardware from an industrial environment into the virtual world, allowing a remote viewer to monitor the operation of the system.
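    The abstract does not name the broker or protocol; assuming MQTT purely for illustration, a client feeding sensor readings into the virtual scene could look like this:

```python
import json
import paho.mqtt.client as mqtt  # MQTT is an assumption; the broker protocol is not specified

def on_message(client, userdata, msg):
    """Forward each sensor reading to the virtual scene (placeholder handler)."""
    reading = json.loads(msg.payload)
    print(f"update virtual object for topic {msg.topic}: {reading}")

client = mqtt.Client()                      # paho-mqtt 1.x-style constructor; 2.x also needs a CallbackAPIVersion
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker address
client.subscribe("factory/sensors/#")       # hypothetical topic hierarchy
client.loop_forever()
```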
  • Item
    Boundary-aided Human Body Shape and Pose Estimation from a Single Image for Garment Design and Manufacture
    (The Eurographics Association, 2018) Xu, Zongyi; Zhang, Qianni; Jain, Eakta and Kosinka, Jirí
    Current virtual clothing design applications mainly use predefined virtual avatars which are created by professionals. The models are unrealistic as they lack personalised body shapes and the simulation of human body muscle and soft tissue. To address this problem, we first fit the state-of-the-art parametric 3D human body model, SMPL, to the 2D joints and boundary of the human body, which are detected automatically using CNN methods. Considering the scenario of virtual dressing, where people are usually in stable poses, we define a stable pose prior from the CMU motion capture (mocap) dataset to further improve the accuracy of pose estimation. Accurate estimation of human body shape and pose provides manufacturers and designers with more comprehensive human body measurements, which is a step forward for clothing design and manufacture over the Internet.
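    Such a fitting step is typically posed as an optimization over the SMPL shape $\beta$ and pose $\theta$; a generic form of the energy (not the paper's exact formulation) is

$$ E(\beta,\theta) \;=\; \sum_i \big\lVert \Pi\big(J_i(\beta,\theta)\big) - \hat{j}_i \big\rVert^2 \;+\; \lambda_b\,E_{\text{boundary}}(\beta,\theta) \;+\; \lambda_\theta\,E_{\text{pose}}(\theta) \;+\; \lambda_\beta\,\lVert\beta\rVert^2, $$

    where $\Pi$ is the camera projection, $J_i(\beta,\theta)$ are the model's 3D joints, $\hat{j}_i$ the detected 2D joints, $E_{\text{boundary}}$ penalizes the distance between the projected model silhouette and the detected body boundary, and $E_{\text{pose}}$ is the stable-pose prior built from the mocap data.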
  • Item
    From Spectra to Perceptual Color: Visualization Tools for the Dimensional Reduction Achieved by the Human Color Sense
    (The Eurographics Association, 2018) Harvey, Joshua S.; Siviour, Clive R.; Smithson, Hannah E.; Jain, Eakta and Kosinka, Jirí
    Physical colors, defined as unique combinations of photon populations whose wavelengths lie in the visible range, occupy an infinite-dimensional real Hilbert space. Any number of photon populations from the continuous spectrum of monochromatic wavelengths may be present to any positive amount. For normal vision, this space collapses to three dimensions at the retina, with any physical color eliciting just three sensor values corresponding to the excitations of the three classes of cone photoreceptor cells. While there are many mappings and visualizations of three-dimensional perceptual color space, attempts to visualize the relationship between infinite-dimensional physical color space and perceptual space are lacking. We present a visualization framework to illustrate this mathematical relation, using animation and transparency to map multiple physical colors to locations in perceptual space, revealing how the perceptual color solid can be understood as intersecting surfaces and volumes. This framework provides a clear and intuitive illustration of color metamerism.
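    The dimensional reduction described above can be written directly: a spectral power distribution $\Phi(\lambda)$ maps to just three cone excitations,

$$ (L, M, S) \;=\; \left(\int \Phi(\lambda)\,\bar{l}(\lambda)\,d\lambda,\;\; \int \Phi(\lambda)\,\bar{m}(\lambda)\,d\lambda,\;\; \int \Phi(\lambda)\,\bar{s}(\lambda)\,d\lambda\right), $$

    where $\bar{l},\bar{m},\bar{s}$ are the cone spectral sensitivities; two physically different spectra whose three integrals agree are metamers and map to the same point in perceptual color space.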