40-Issue 2

Geometry and Transformations
Restricted Power Diagrams on the GPU
Justine Basselin, Laurent Alonso, Nicolas Ray, Dmitry Sokolov, Sylvain Lefebvre, and Bruno Lévy
Fast Updates for Least-Squares Rotational Alignment
Jiayi Eris Zhang, Alec Jacobson, and Marc Alexa
Navigating and Exploring Images and Videos
Real-Time Frequency Adjustment of Images and Videos
Rafael L. Germano, Manuel M. Oliveira, and Eduardo S. L. Gastal
3D and Beyond
Coherent Mark-based Stylization of 3D Scenes at the Compositing Stage
Maxime Garcia, Romain Vergne, Mohamed-Amine Farhat, Pierre Bénard, Camille Noûs, and Joëlle Thollot
Higher Dimensional Graphics: Conceiving Worlds in Four Spatial Dimensions and Beyond
Marco Cavallo
Texture Defragmentation for Photo-Reconstructed 3D Models
Andrea Maggiordomo, Paolo Cignoni, and Marco Tarini
Rendering
Temporally Reliable Motion Vectors for Real-time Ray Tracing
Zheng Zeng, Shiqiu Liu, Jinglei Yang, Lu Wang, and Ling-Qi Yan
Rank-1 Lattices for Efficient Path Integral Estimation
Hongli Liu, Honglei Han, and Min Jiang
A Multiscale Microfacet Model Based on Inverse Bin Mapping
Asen Atanasov, Alexander Wilkie, Vladimir Koylazov, and Jaroslav Křivánek
Generative Models
Semantics-Guided Latent Space Exploration for Shape Generation
Tansin Jahan, Yanran Guan, and Oliver van Kaick
Towards a Neural Graphics Pipeline for Controllable Image Generation
Xuelin Chen, Daniel Cohen-Or, Baoquan Chen, and Niloy J. Mitra
Write Like You: Synthesizing Your Cursive Online Chinese Handwriting via Metric-based Meta Learning
Shusen Tang and Zhouhui Lian
Deep Rendering
Practical Face Reconstruction via Differentiable Ray Tracing
Abdallah Dib, Gaurav Bharaj, Junghyun Ahn, Cédric Thébault, Philippe Gosselin, Marco Romeo, and Louis Chevallier
Learning Multiple-Scattering Solutions for Sphere-Tracing of Volumetric Subsurface Effects
Ludwic Leonard, Kevin Höhlein, and Rüdiger Westermann
Deep HDR Estimation with Generative Detail Reconstruction
Yang Zhang and Tunc O. Aydin
Fabrication
Automatic Surface Segmentation for Seamless Fabrication Using 4-axis Milling Machines
Stefano Nuvoli, Alessandro Tola, Alessandro Muntoni, Nico Pietroni, Enrico Gobbetti, and Riccardo Scateni
Neural Acceleration of Scattering-Aware Color 3D Printing
Tobias Rittig, Denis Sumin, Vahid Babaei, Piotr Didyk, Alexey Voloboy, Alexander Wilkie, Bernd Bickel, Karol Myszkowski, Tim Weyrich, and Jaroslav Křivánek
Levitating Rigid Objects with Hidden Rods and Wires
Sarah Kushner, Risa Ulinski, Karan Singh, David I. W. Levin, and Alec Jacobson
Sampling Theory
Correlation-Aware Multiple Importance Sampling for Bidirectional Rendering Algorithms
Pascal Grittmann, Iliyan Georgiev, and Philipp Slusallek
Cyclostationary Gaussian Noise: Theory and Synthesis
Nicolas Lutz, Basile Sauvage, and Jean-Michel Dischler
Learning Pose Manifolds and Motor Skills
Learning and Exploring Motor Skills with Spacetime Bounds
Li-Ke Ma, Zeshi Yang, Xin Tong, Baining Guo, and KangKang Yin
LoBSTr: Real-time Lower-body Pose Prediction from Sparse Upper-body Tracking Signals
Dongseok Yang, Doyeon Kim, and Sung-Hee Lee
Mesh Generation
Layout Embedding via Combinatorial Optimization
Janis Born, Patrick Schmidt, and Leif Kobbelt
Geometric Construction of Auxetic Metamaterials
Georges-Pierre Bonneau, Stefanie Hahmann, and Johana Marku
Quad Layouts via Constrained T-Mesh Quantization
Max Lyon, Marcel Campen, and Leif Kobbelt
Material Acquisition and Estimation
Adversarial Single-Image SVBRDF Estimation with Hybrid Training
Xilong Zhou and Nima Khademi Kalantari
Perceptual Quality of BRDF Approximations: Dataset and Metrics
Guillaume Lavoué, Nicolas Bonneel, Jean-Philippe Farrugia, and Cyril Soler
Fluids
Honey, I Shrunk the Domain: Frequency-aware Force Field Reduction for Efficient Fluids Optimization
Jingwei Tang, Vinicius C. Azevedo, Guillaume Cordonnier, and Barbara Solenthaler
Two-step Temporal Interpolation Network Using Forward Advection for Efficient Smoke Simulation
Young Jin Oh and In-Kwon Lee
Patch Erosion for Deformable Lapped Textures on 3D Fluids
Jonathan Gagnon, Julián E. Guzmán, David Mould, and Eric Paquette
Learning from Human Motion
Walk2Map: Extracting Floor Plans from Indoor Walk Trajectories
Claudio Mura, Renato Pajarola, Konrad Schindler, and Niloy Mitra
Learning Human Search Behavior from Egocentric Visual Inputs
Maks Sorokin, Wenhao Yu, Sehoon Ha, and C. Karen Liu
Deep Detail Enhancement for Any Garment
Meng Zhang, Tuanfeng Wang, Duygu Ceylan, and Niloy J. Mitra
Visualization
Enabling Viewpoint Learning through Dynamic Label Generation
Michael Schelling, Pedro Hermosilla, Pere-Pau Vázquez, and Timo Ropinski
Blue Noise Plots
Christian van Onzenoodt, Gurprit Singh, Timo Ropinski, and Tobias Ritschel
Shape Analysis
Orthogonalized Fourier Polynomials for Signal Approximation and Transfer
Filippo Maggioli, Simone Melzi, Maks Ovsjanikov, Michael M. Bronstein, and Emanuele Rodolà
Physically-based Simulation
Physically-based Book Simulation with Freeform Developable Surfaces
Thomas Wolf, Victor Cornillère, and Olga Sorkine-Hornung
Flow Visualization
Curve Complexity Heuristic KD-trees for Neighborhood-based Exploration of 3D Curves
Yucheng Lu, Luyu Cheng, Tobias Isenberg, Chi-Wing Fu, Guoning Chen, Hui Liu, Oliver Deussen, and Yunhai Wang
Data Structures
SnakeBinning: Efficient Temporally Coherent Triangle Packing for Shading Streaming
Jozef Hladky, Hans-Peter Seidel, and Markus Steinberger
Hierarchical Raster Occlusion Culling
Gi Beom Lee, Moonsoo Jeong, Yechan Seok, and Sungkil Lee
Analyzing and Integrating RGB-D Images
Interactive Photo Editing on Smartphones via Intrinsic Decomposition
Sumit Shekhar, Max Reimann, Maximilian Mayer, Amir Semmo, Sebastian Pasewaldt, Jürgen Döllner, and Matthias Trapp
RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects
Yu-Shiang Wong, Changjian Li, Matthias Nießner, and Niloy J. Mitra
Spatiotemporal Texture Reconstruction for Dynamic Objects Using a Single RGB-D Camera
Hyomin Kim, Jungeon Kim, Hyeonseo Nam, Jaesik Park, and Seungyong Lee
Skinning and Deformation
MultiResGNet: Approximating Nonlinear Deformation via Multi-Resolution Graphs
Tianxing Li, Rui Shi, and Takashi Kanai
Velocity Skinning for Real-time Stylized Skeletal Animation
Damien Rohmer, Marco Tarini, Niranjan Kalyanasundaram, Faezeh Moshfeghifar, Marie-Paule Cani, and Victor Zordan
Expressive Modeling
STALP: Style Transfer with Auxiliary Limited Pairing
David Futschik, Michal Kučera, Mike Lukáč, Zhaowen Wang, Eli Shechtman, and Daniel Sýkora
Local Light Alignment for Multi-Scale Shape Depiction
Nolan Mestres, Romain Vergne, Camille Noûs, and Joëlle Thollot

BibTeX (40-Issue 2)

@article{10.1111:cgf.142610,
  journal = {Computer Graphics Forum},
  title = {{Restricted Power Diagrams on the GPU}},
  author = {Basselin, Justine and Alonso, Laurent and Ray, Nicolas and Sokolov, Dmitry and Lefebvre, Sylvain and Lévy, Bruno},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142610}
}

@article{10.1111:cgf.142611,
  journal = {Computer Graphics Forum},
  title = {{Fast Updates for Least-Squares Rotational Alignment}},
  author = {Zhang, Jiayi Eris and Jacobson, Alec and Alexa, Marc},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142611}
}

@article{10.1111:cgf.142612,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Frequency Adjustment of Images and Videos}},
  author = {Germano, Rafael L. and Oliveira, Manuel M. and Gastal, Eduardo S. L.},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142612}
}

@article{10.1111:cgf.142613,
  journal = {Computer Graphics Forum},
  title = {{Coherent Mark-based Stylization of 3D Scenes at the Compositing Stage}},
  author = {Garcia, Maxime and Vergne, Romain and Farhat, Mohamed-Amine and Bénard, Pierre and Noûs, Camille and Thollot, Joëlle},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142613}
}

@article{10.1111:cgf.142614,
  journal = {Computer Graphics Forum},
  title = {{Higher Dimensional Graphics: Conceiving Worlds in Four Spatial Dimensions and Beyond}},
  author = {Cavallo, Marco},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142614}
}

@article{10.1111:cgf.142615,
  journal = {Computer Graphics Forum},
  title = {{Texture Defragmentation for Photo-Reconstructed 3D Models}},
  author = {Maggiordomo, Andrea and Cignoni, Paolo and Tarini, Marco},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142615}
}

@article{10.1111:cgf.142616,
  journal = {Computer Graphics Forum},
  title = {{Temporally Reliable Motion Vectors for Real-time Ray Tracing}},
  author = {Zeng, Zheng and Liu, Shiqiu and Yang, Jinglei and Wang, Lu and Yan, Ling-Qi},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142616}
}

@article{10.1111:cgf.142617,
  journal = {Computer Graphics Forum},
  title = {{Rank-1 Lattices for Efficient Path Integral Estimation}},
  author = {Liu, Hongli and Han, Honglei and Jiang, Min},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142617}
}

@article{10.1111:cgf.142618,
  journal = {Computer Graphics Forum},
  title = {{A Multiscale Microfacet Model Based on Inverse Bin Mapping}},
  author = {Atanasov, Asen and Wilkie, Alexander and Koylazov, Vladimir and Křivánek, Jaroslav},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142618}
}

@article{10.1111:cgf.142619,
  journal = {Computer Graphics Forum},
  title = {{Semantics-Guided Latent Space Exploration for Shape Generation}},
  author = {Jahan, Tansin and Guan, Yanran and Kaick, Oliver van},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142619}
}

@article{10.1111:cgf.142620,
  journal = {Computer Graphics Forum},
  title = {{Towards a Neural Graphics Pipeline for Controllable Image Generation}},
  author = {Chen, Xuelin and Cohen-Or, Daniel and Chen, Baoquan and Mitra, Niloy J.},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142620}
}

@article{10.1111:cgf.142621,
  journal = {Computer Graphics Forum},
  title = {{Write Like You: Synthesizing Your Cursive Online Chinese Handwriting via Metric-based Meta Learning}},
  author = {Tang, Shusen and Lian, Zhouhui},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142621}
}

@article{10.1111:cgf.142622,
  journal = {Computer Graphics Forum},
  title = {{Practical Face Reconstruction via Differentiable Ray Tracing}},
  author = {Dib, Abdallah and Bharaj, Gaurav and Ahn, Junghyun and Thébault, Cédric and Gosselin, Philippe and Romeo, Marco and Chevallier, Louis},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142622}
}

@article{10.1111:cgf.142623,
  journal = {Computer Graphics Forum},
  title = {{Learning Multiple-Scattering Solutions for Sphere-Tracing of Volumetric Subsurface Effects}},
  author = {Leonard, Ludwic and Höhlein, Kevin and Westermann, Rüdiger},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142623}
}

@article{10.1111:cgf.142624,
  journal = {Computer Graphics Forum},
  title = {{Deep HDR Estimation with Generative Detail Reconstruction}},
  author = {Zhang, Yang and Aydin, Tunc O.},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142624}
}

@article{10.1111:cgf.142625,
  journal = {Computer Graphics Forum},
  title = {{Automatic Surface Segmentation for Seamless Fabrication Using 4-axis Milling Machines}},
  author = {Nuvoli, Stefano and Tola, Alessandro and Muntoni, Alessandro and Pietroni, Nico and Gobbetti, Enrico and Scateni, Riccardo},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142625}
}

@article{10.1111:cgf.142626,
  journal = {Computer Graphics Forum},
  title = {{Neural Acceleration of Scattering-Aware Color 3D Printing}},
  author = {Rittig, Tobias and Sumin, Denis and Babaei, Vahid and Didyk, Piotr and Voloboy, Alexey and Wilkie, Alexander and Bickel, Bernd and Myszkowski, Karol and Weyrich, Tim and Křivánek, Jaroslav},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142626}
}

@article{10.1111:cgf.142627,
  journal = {Computer Graphics Forum},
  title = {{Levitating Rigid Objects with Hidden Rods and Wires}},
  author = {Kushner, Sarah and Ulinski, Risa and Singh, Karan and Levin, David I. W. and Jacobson, Alec},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142627}
}

@article{10.1111:cgf.142628,
  journal = {Computer Graphics Forum},
  title = {{Correlation-Aware Multiple Importance Sampling for Bidirectional Rendering Algorithms}},
  author = {Grittmann, Pascal and Georgiev, Iliyan and Slusallek, Philipp},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142628}
}

@article{10.1111:cgf.142629,
  journal = {Computer Graphics Forum},
  title = {{Cyclostationary Gaussian Noise: Theory and Synthesis}},
  author = {Lutz, Nicolas and Sauvage, Basile and Dischler, Jean-Michel},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142629}
}

@article{10.1111:cgf.142630,
  journal = {Computer Graphics Forum},
  title = {{Learning and Exploring Motor Skills with Spacetime Bounds}},
  author = {Ma, Li-Ke and Yang, Zeshi and Tong, Xin and Guo, Baining and Yin, KangKang},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142630}
}

@article{10.1111:cgf.142631,
  journal = {Computer Graphics Forum},
  title = {{LoBSTr: Real-time Lower-body Pose Prediction from Sparse Upper-body Tracking Signals}},
  author = {Yang, Dongseok and Kim, Doyeon and Lee, Sung-Hee},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142631}
}

@article{10.1111:cgf.142632,
  journal = {Computer Graphics Forum},
  title = {{Layout Embedding via Combinatorial Optimization}},
  author = {Born, Janis and Schmidt, Patrick and Kobbelt, Leif},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142632}
}

@article{10.1111:cgf.142633,
  journal = {Computer Graphics Forum},
  title = {{Geometric Construction of Auxetic Metamaterials}},
  author = {Bonneau, Georges-Pierre and Hahmann, Stefanie and Marku, Johana},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142633}
}

@article{10.1111:cgf.142634,
  journal = {Computer Graphics Forum},
  title = {{Quad Layouts via Constrained T-Mesh Quantization}},
  author = {Lyon, Max and Campen, Marcel and Kobbelt, Leif},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142634}
}

@article{10.1111:cgf.142635,
  journal = {Computer Graphics Forum},
  title = {{Adversarial Single-Image SVBRDF Estimation with Hybrid Training}},
  author = {Zhou, Xilong and Kalantari, Nima Khademi},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142635}
}

@article{10.1111:cgf.142636,
  journal = {Computer Graphics Forum},
  title = {{Perceptual Quality of BRDF Approximations: Dataset and Metrics}},
  author = {Lavoué, Guillaume and Bonneel, Nicolas and Farrugia, Jean-Philippe and Soler, Cyril},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142636}
}

@article{10.1111:cgf.142637,
  journal = {Computer Graphics Forum},
  title = {{Honey, I Shrunk the Domain: Frequency-aware Force Field Reduction for Efficient Fluids Optimization}},
  author = {Tang, Jingwei and Azevedo, Vinicius C. and Cordonnier, Guillaume and Solenthaler, Barbara},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142637}
}

@article{10.1111:cgf.142638,
  journal = {Computer Graphics Forum},
  title = {{Two-step Temporal Interpolation Network Using Forward Advection for Efficient Smoke Simulation}},
  author = {Oh, Young Jin and Lee, In-Kwon},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142638}
}

@article{10.1111:cgf.142639,
  journal = {Computer Graphics Forum},
  title = {{Patch Erosion for Deformable Lapped Textures on 3D Fluids}},
  author = {Gagnon, Jonathan and Guzmán, Julián E. and Mould, David and Paquette, Eric},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142639}
}

@article{10.1111:cgf.142640,
  journal = {Computer Graphics Forum},
  title = {{Walk2Map: Extracting Floor Plans from Indoor Walk Trajectories}},
  author = {Mura, Claudio and Pajarola, Renato and Schindler, Konrad and Mitra, Niloy},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142640}
}

@article{10.1111:cgf.142641,
  journal = {Computer Graphics Forum},
  title = {{Learning Human Search Behavior from Egocentric Visual Inputs}},
  author = {Sorokin, Maks and Yu, Wenhao and Ha, Sehoon and Liu, C. Karen},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142641}
}

@article{10.1111:cgf.142642,
  journal = {Computer Graphics Forum},
  title = {{Deep Detail Enhancement for Any Garment}},
  author = {Zhang, Meng and Wang, Tuanfeng and Ceylan, Duygu and Mitra, Niloy J.},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142642}
}

@article{10.1111:cgf.142643,
  journal = {Computer Graphics Forum},
  title = {{Enabling Viewpoint Learning through Dynamic Label Generation}},
  author = {Schelling, Michael and Hermosilla, Pedro and Vázquez, Pere-Pau and Ropinski, Timo},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142643}
}

@article{10.1111:cgf.142644,
  journal = {Computer Graphics Forum},
  title = {{Blue Noise Plots}},
  author = {Onzenoodt, Christian van and Singh, Gurprit and Ropinski, Timo and Ritschel, Tobias},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142644}
}

@article{10.1111:cgf.142645,
  journal = {Computer Graphics Forum},
  title = {{Orthogonalized Fourier Polynomials for Signal Approximation and Transfer}},
  author = {Maggioli, Filippo and Melzi, Simone and Ovsjanikov, Maks and Bronstein, Michael M. and Rodolà, Emanuele},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142645}
}

@article{10.1111:cgf.142646,
  journal = {Computer Graphics Forum},
  title = {{Physically-based Book Simulation with Freeform Developable Surfaces}},
  author = {Wolf, Thomas and Cornillère, Victor and Sorkine-Hornung, Olga},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142646}
}

@article{10.1111:cgf.142647,
  journal = {Computer Graphics Forum},
  title = {{Curve Complexity Heuristic KD-trees for Neighborhood-based Exploration of 3D Curves}},
  author = {Lu, Yucheng and Cheng, Luyu and Isenberg, Tobias and Fu, Chi-Wing and Chen, Guoning and Liu, Hui and Deussen, Oliver and Wang, Yunhai},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142647}
}

@article{10.1111:cgf.142648,
  journal = {Computer Graphics Forum},
  title = {{SnakeBinning: Efficient Temporally Coherent Triangle Packing for Shading Streaming}},
  author = {Hladky, Jozef and Seidel, Hans-Peter and Steinberger, Markus},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142648}
}

@article{10.1111:cgf.142649,
  journal = {Computer Graphics Forum},
  title = {{Hierarchical Raster Occlusion Culling}},
  author = {Lee, Gi Beom and Jeong, Moonsoo and Seok, Yechan and Lee, Sungkil},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142649}
}

@article{10.1111:cgf.142650,
  journal = {Computer Graphics Forum},
  title = {{Interactive Photo Editing on Smartphones via Intrinsic Decomposition}},
  author = {Shekhar, Sumit and Reimann, Max and Mayer, Maximilian and Semmo, Amir and Pasewaldt, Sebastian and Döllner, Jürgen and Trapp, Matthias},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142650}
}

@article{10.1111:cgf.142651,
  journal = {Computer Graphics Forum},
  title = {{RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects}},
  author = {Wong, Yu-Shiang and Li, Changjian and Nießner, Matthias and Mitra, Niloy J.},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142651}
}

@article{10.1111:cgf.142652,
  journal = {Computer Graphics Forum},
  title = {{Spatiotemporal Texture Reconstruction for Dynamic Objects Using a Single RGB-D Camera}},
  author = {Kim, Hyomin and Kim, Jungeon and Nam, Hyeonseo and Park, Jaesik and Lee, Seungyong},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142652}
}

@article{10.1111:cgf.142653,
  journal = {Computer Graphics Forum},
  title = {{MultiResGNet: Approximating Nonlinear Deformation via Multi-Resolution Graphs}},
  author = {Li, Tianxing and Shi, Rui and Kanai, Takashi},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142653}
}

@article{10.1111:cgf.142654,
  journal = {Computer Graphics Forum},
  title = {{Velocity Skinning for Real-time Stylized Skeletal Animation}},
  author = {Rohmer, Damien and Tarini, Marco and Kalyanasundaram, Niranjan and Moshfeghifar, Faezeh and Cani, Marie-Paule and Zordan, Victor},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142654}
}

@article{10.1111:cgf.142655,
  journal = {Computer Graphics Forum},
  title = {{STALP: Style Transfer with Auxiliary Limited Pairing}},
  author = {Futschik, David and Kučera, Michal and Lukáč, Mike and Wang, Zhaowen and Shechtman, Eli and Sýkora, Daniel},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142655}
}

@article{10.1111:cgf.142656,
  journal = {Computer Graphics Forum},
  title = {{Local Light Alignment for Multi-Scale Shape Depiction}},
  author = {Mestres, Nolan and Vergne, Romain and Noûs, Camille and Thollot, Joëlle},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142656}
}

@article{10.1111:cgf.142657,
  journal = {Computer Graphics Forum},
  title = {{EUROGRAPHICS 2021: CGF 40-2 Frontmatter}},
  author = {Mitra, Niloy and Viola, Ivan},
  year = {2021},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.142657}
}

Recent Submissions

  • Item
    Restricted Power Diagrams on the GPU
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Basselin, Justine; Alonso, Laurent; Ray, Nicolas; Sokolov, Dmitry; Lefebvre, Sylvain; Lévy, Bruno; Mitra, Niloy and Viola, Ivan
    We propose a method to simultaneously decompose a 3D object into power diagram cells and to integrate given functions in each of the obtained simple regions. We offer a novel, highly parallel algorithm that lends itself to an efficient GPU implementation. It is optimized for algorithms that need to compute many decompositions, for instance, centroidal Voronoi tessellation algorithms and incompressible fluid dynamics simulations. We propose an efficient solution that directly evaluates the integrals over every cell without computing the power diagram explicitly and without intersecting it with a tetrahedralization of the domain. Most computations are performed on the fly, without storing the power diagram. We manipulate a triangulation of the boundary of the domain (instead of tetrahedralizing the domain) to speed up the process. Moreover, the cells are treated independently of one another, making it possible to trivially scale up on a parallel architecture. Despite recent Voronoi diagram generation methods optimized for the GPU, computing integrals over restricted power diagrams still poses significant challenges; the restriction to a complex simulation domain is difficult and likely to be slow. It is not trivial to determine when a cell of a power diagram is completely computed, and the resulting integrals (e.g. the weighted Laplacian operator matrix) do not fit into fast (shared) GPU memory. We address all these issues and boost the performance of the state-of-the-art algorithms by a factor of 2 to 3 for (unrestricted) Voronoi diagrams, and achieve a 50× speed-up with respect to CPU implementations for restricted power diagrams. An essential ingredient to achieve this is our new scheduling strategy that allows us to treat each Voronoi/power diagram cell with optimal settings and to benefit from the fast memory.
  • Item
    Fast Updates for Least-Squares Rotational Alignment
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhang, Jiayi Eris; Jacobson, Alec; Alexa, Marc; Mitra, Niloy and Viola, Ivan
    Across computer graphics, vision, robotics and simulation, many applications rely on determining the 3D rotation that aligns two objects or sets of points. The standard solution is to use singular value decomposition (SVD), where the optimal rotation is recovered as the product of the singular vectors. Faster computation of only the rotation is possible using suitable parameterizations of the rotations and iterative optimization. We propose such a method based on the Cayley transformations. The resulting optimization problem allows better local quadratic approximation compared to the Taylor approximation of the exponential map. This results in both faster convergence as well as more stable approximation compared to other iterative approaches. It also maps well to AVX vectorization. We compare our implementation with a wide range of alternatives on real and synthetic data. The results demonstrate up to two orders of magnitude of speedup compared to a straightforward SVD implementation and a 1.5-6 times speedup over popular optimized code.
  • Item
    Real-Time Frequency Adjustment of Images and Videos
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Germano, Rafael L.; Oliveira, Manuel M.; Gastal, Eduardo S. L.; Mitra, Niloy and Viola, Ivan
    We present a technique for real-time adjustment of spatial frequencies in images and videos. Our method allows for both decreasing and increasing of frequencies, and is orthogonal to image resizing. Thus, it can be used to automatically adjust spatial frequencies to preserve the appearance of structured patterns during image downscaling and upscaling. By pre-computing the image's space-frequency decomposition and its unwrapped phases, these operations can be performed in real time, thanks to our novel mathematical perspective on frequency manipulation of digital images: interpreting the problem through the theory of instantaneous frequencies and phase unwrapping. To make this possible, we introduce an algorithm for the simultaneous phase unwrapping of several unordered frequency components, which also deals with the frequency-sign ambiguity of real signals. As such, our method provides theoretical and practical improvements to the concept of spectral remapping, enabling real-time performance and improved color handling. We demonstrate its effectiveness on a large number of images subject to frequency adjustment. By providing real-time control over the spatial frequencies associated with structured patterns, our technique expands the range of creative and technical possibilities for image and video processing.
  • Item
    Coherent Mark-based Stylization of 3D Scenes at the Compositing Stage
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Garcia, Maxime; Vergne, Romain; Farhat, Mohamed-Amine; Bénard, Pierre; Noûs, Camille; Thollot, Joëlle; Mitra, Niloy and Viola, Ivan
    We present a novel temporally coherent stylized rendering technique working entirely at the compositing stage. We first generate a distribution of 3D anchor points using an implicit grid based on the local object positions stored in a G-buffer, hence following object motion. We then draw splats in screen space anchored to these points so as to be motion coherent. To increase the perceived flatness of the style, we adjust the anchor points density using a fractalization mechanism. Sudden changes are prevented by controlling the anchor points opacity and introducing a new order-independent blending function. We demonstrate the versatility of our method by showing a large variety of styles thanks to the freedom offered by the splats content and their attributes that can be controlled by any G-buffer.
  • Item
    Higher Dimensional Graphics: Conceiving Worlds in Four Spatial Dimensions and Beyond
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Cavallo, Marco; Mitra, Niloy and Viola, Ivan
    While the interpretation of high-dimensional datasets has become a necessity in most industries, the spatial visualization of higher-dimensional geometry has mostly remained a niche research topic for mathematicians and physicists. Intermittent contributions to this field date back more than a century, and have had a non-negligible influence on contemporary art and philosophy. However, most contributions have focused on the understanding of specific mathematical shapes, with few concrete applications. In this work, we attempt to revive the community's interest in visualizing higher dimensional geometry by shifting the focus from the visualization of abstract shapes to the design of a broader hyper-universe concept, wherein 3D and 4D objects can coexist and interact with each other. Specifically, we discuss the content definition, authoring patterns, and technical implementations associated with the process of extending standard 3D applications so as to support 4D mechanics. We operationalize our ideas through the introduction of a new hybrid 3D/4D videogame called Across Dimensions, which we developed in Unity3D through the integration of our own 4D plugin.
  • Item
    Texture Defragmentation for Photo-Reconstructed 3D Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Maggiordomo, Andrea; Cignoni, Paolo; Tarini, Marco; Mitra, Niloy and Viola, Ivan
    We propose a method to improve an existing parametrization (UV-map layout) of a textured 3D model, targeted explicitly at alleviating typical defects afflicting models generated with automatic photo-reconstruction tools from real-world objects. This class of 3D data is becoming increasingly important thanks to the growing popularity of reliable, ready-to-use photogrammetry software packages. The resulting textured models are richly detailed, but their underlying parametrization typically falls short of many practical requirements, particularly exhibiting excessive fragmentation and consequent problems. Producing a completely new UV-map, with standard parametrization techniques, and then resampling a new texture image, is often neither practical nor desirable for at least two reasons: first, these models have characteristics (such as inconsistencies, high resolution) that make them unfit for automatic or manual parametrization; second, the required resampling leads to unnecessary signal degradation because this process is unaware of the original texel densities. In contrast, our method improves the existing UV-map instead of replacing it, balancing the reduction of the map fragmentation with signal degradation due to resampling, while also avoiding oversampling of the original signal. The proposed approach is fully automatic and extensively tested on a large benchmark of photo-reconstructed models; quantitative evaluation evidences a drastic and consistent improvement of the mappings.
  • Item
    Temporally Reliable Motion Vectors for Real-time Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zeng, Zheng; Liu, Shiqiu; Yang, Jinglei; Wang, Lu; Yan, Ling-Qi; Mitra, Niloy and Viola, Ivan
    Real-time ray tracing (RTRT) is being pervasively applied. The key to RTRT is a reliable denoising scheme that reconstructs clean images from significantly undersampled noisy inputs, usually at 1 sample per pixel as limited by current hardware's computing power. State-of-the-art reconstruction methods all rely on temporal filtering to find correspondences of current pixels in the previous frame, described using per-pixel screen-space motion vectors. While these approaches are demonstrably powerful, they suffer from a common issue that the temporal information cannot be used when the motion vectors are not valid, i.e. when temporal correspondences are not obviously available or do not exist in theory. We introduce temporally reliable motion vectors that aim at deeper exploration of temporal coherence, especially for applications generally believed to be difficult, such as shadows, glossy reflections and occlusions, with the key idea to detect and track the cause of each effect. We show that our temporally reliable motion vectors produce significantly better temporal results on a variety of dynamic scenes when compared to state-of-the-art methods, but with negligible performance overhead.
  • Item
    Rank-1 Lattices for Efficient Path Integral Estimation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Liu, Hongli; Han, Honglei; Jiang, Min; Mitra, Niloy and Viola, Ivan
    We introduce rank-1 lattices as a quasi-random sequence to the numerical estimation of the high-dimensional path integral. Previous attempts at utilizing rank-1 lattices in computer graphics were limited to low-dimensional applications, intentionally avoiding high dimensionality because the lattice search is NP-hard. We propose a novel framework that tackles this challenge, inspired by the rippling effect of the sample paths. Contrary to conventional search approaches, our framework is based on recursively permuting the preliminarily selected components of the generator vector to achieve better pairwise projections and minimize the discrepancy of the path vertex coordinates in scene manifold spaces, resulting in improved rendering quality. It allows the offline search of arbitrarily high-dimensional lattices to finish in a reasonable amount of time while removing the need to use all lattice points in the traditional definition, which opens the gate for their use in progressive rendering. Our rank-1 lattices successfully maintain the pixel variance at a comparable or even lower level compared to the Sobol' sampler, which offers a brand new solution to design efficient samplers for path tracing.
  • Item
    A Multiscale Microfacet Model Based on Inverse Bin Mapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Atanasov, Asen; Wilkie, Alexander; Koylazov, Vladimir; Křivánek, Jaroslav; Mitra, Niloy and Viola, Ivan
    Accurately controllable shading detail is a crucial aspect of realistic appearance modelling. Two fundamental building blocks for this are microfacet BRDFs, which describe the statistical behaviour of infinitely small facets, and normal maps, which provide user-controllable spatio-directional surface features. We analyse the filtering of the combined effect of a microfacet BRDF and a normal map. By partitioning the half-vector domain into bins we show that the filtering problem can be reduced to evaluation of an integral histogram (IH), a generalization of a summed-area table (SAT). Integral histograms are known for their large memory requirements, which are usually proportional to the number of bins. To alleviate this, we introduce Inverse Bin Maps, a specialised form of IH with a memory footprint that is practically independent of the number of bins. Based on these, we present a memory-efficient, production-ready approach for filtering of high resolution normal maps with arbitrary Beckmann flake roughness. In the corner case of specular normal maps (zero, or very small roughness values) our method shows similar convergence rates to the current state of the art, and is also more memory efficient.
  • Item
    Semantics-Guided Latent Space Exploration for Shape Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Jahan, Tansin; Guan, Yanran; Kaick, Oliver van; Mitra, Niloy and Viola, Ivan
    We introduce an approach to incorporate user guidance into shape generation approaches based on deep networks. Generative networks such as autoencoders and generative adversarial networks are trained to encode shapes into latent vectors, effectively learning a latent shape space that can be sampled for generating new shapes. Our main idea is to enable users to explore the shape space with the use of high-level semantic keywords. Specifically, the user inputs a set of keywords that describe the general attributes of the shape to be generated, e.g., ''four legs'' for a chair. Then, our method maps the keywords to a subspace of the latent space, where the subspace captures the shapes possessing the specified attributes. The user then explores only this subspace to search for shapes that satisfy the design goal, in a process similar to using a parametric shape model. Our exploratory approach allows users to model shapes at a high level without the need for advanced artistic skills, in contrast to existing methods that guide the generation with sketching or partial modeling of a shape. Our technical contribution to enable this exploration-based approach is the introduction of a label regression neural network coupled with shape encoder/decoder networks. The label regression network takes the user-provided keywords and maps them to distributions in the latent space. We show that our method allows users to explore the shape space and generate a variety of shapes with selected high-level attributes.
  • Item
    Towards a Neural Graphics Pipeline for Controllable Image Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Chen, Xuelin; Cohen-Or, Daniel; Chen, Baoquan; Mitra, Niloy J.; Mitra, Niloy and Viola, Ivan
    In this paper, we leverage advances in neural networks towards forming a neural rendering for controllable image generation, thereby bypassing the need for detailed modeling in the conventional graphics pipeline. To this end, we present Neural Graphics Pipeline (NGP), a hybrid generative model that brings together neural and traditional image formation models. NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation. To form an image, NGP generates coarse 3D models that are fed into neural rendering modules to produce view-specific interpretable 2D maps, which are then composited into the final output image using a traditional image formation model. Our approach offers control over image generation by providing direct handles controlling illumination and camera parameters, in addition to control over shape and appearance variations. The key challenge is to learn these controls through unsupervised training that links generated coarse 3D models with unpaired real images via neural and traditional (e.g., Blinn-Phong) rendering functions, without establishing an explicit correspondence between them. We demonstrate the effectiveness of our approach on controllable image generation of single-object scenes. We evaluate our hybrid modeling framework, compare with neural-only generation methods (namely, DCGAN, LSGAN, WGAN-GP, VON, and SRNs), report improvement in FID scores against real images, and demonstrate that NGP supports direct controls common in traditional forward rendering. Code is available at http://geometry.cs.ucl.ac.uk/projects/2021/ngp.
  • Item
    Write Like You: Synthesizing Your Cursive Online Chinese Handwriting via Metric-based Meta Learning
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Tang, Shusen; Lian, Zhouhui; Mitra, Niloy and Viola, Ivan
    In this paper, we propose a novel Sequence-to-Sequence model based on metric-based meta learning for the arbitrary style transfer of online Chinese handwritings. Unlike most existing methods that treat Chinese handwritings as images and are unable to reflect the human writing process, the proposed model directly handles sequential online Chinese handwritings. Generally, our model consists of three sub-models: a content encoder, a style encoder and a decoder, which are all Recurrent Neural Networks. In order to adaptively obtain the style information, we introduce an attention-based adaptive style block which has been experimentally proven to bring considerable improvement to our model. In addition, to disentangle the latent style information from characters written by any writers effectively, we adopt metric-based meta learning and pre-train the style encoder using a carefully-designed discriminative loss function. Then, our entire model is trained in an end-to-end manner and the decoder adaptively receives the style information from the style encoder and the content information from the content encoder to synthesize the target output. Finally, by feeding the trained model with a content character and several characters written by a given user, our model can write that Chinese character in the user's handwriting style by drawing strokes one by one like humans. That is to say, as long as you write several Chinese character samples, our model can imitate your handwriting style when writing. In addition, after fine-tuning the model with a few samples, it can generate more realistic handwriting that is difficult to distinguish from the real samples. Both qualitative and quantitative experiments demonstrate the effectiveness and superiority of our method.
  • Item
    Practical Face Reconstruction via Differentiable Ray Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Dib, Abdallah; Bharaj, Gaurav; Ahn, Junghyun; Thébault, Cédric; Gosselin, Philippe; Romeo, Marco; Chevallier, Louis; Mitra, Niloy and Viola, Ivan
    We present a novel, differentiable ray-tracing based face reconstruction approach where scene attributes - 3D geometry, reflectance (diffuse, specular and roughness), pose, camera parameters, and scene illumination - are estimated from unconstrained monocular images. The proposed method models scene illumination via a novel, parameterized virtual light stage, which, in conjunction with differentiable ray-tracing, introduces a coarse-to-fine optimization formulation for face reconstruction. Our method can not only handle unconstrained illumination and self-shadow conditions, but also estimate diffuse and specular albedos. To estimate the face attributes consistently and with practical semantics, a two-stage optimization strategy systematically uses a subset of parametric attributes, where subsequent attribute estimations factor in those previously estimated. For example, self-shadows estimated during the first stage later prevent their baking into the personalized diffuse and specular albedos in the second stage. We show the efficacy of our approach in several real-world scenarios, where face attributes can be estimated even under extreme illumination conditions. Ablation studies, analyses and comparisons against several recent state-of-the-art methods show improved accuracy and versatility of our approach. With consistent face attribute reconstruction, our method leads to several style edit and transfer applications (illumination, albedo, self-shadow), as discussed in the paper.
  • Item
    Learning Multiple-Scattering Solutions for Sphere-Tracing of Volumetric Subsurface Effects
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Leonard, Ludwic; Höhlein, Kevin; Westermann, Rüdiger; Mitra, Niloy and Viola, Ivan
    Accurate subsurface scattering solutions require the integration of optical material properties along many complicated light paths. We present a method that learns a simple geometric approximation of random paths in a homogeneous volume with translucent material. The generated representation allows determining the absorption along the path as well as a direct lighting contribution, which is representative of all scatter events along the path. A sequence of conditional variational auto-encoders (CVAEs) is trained to model the statistical distribution of the photon paths inside a spherical region in the presence of multiple scattering events. A first CVAE learns how to sample the number of scatter events, occurring on a ray path inside the sphere, which effectively determines the probability of this ray to be absorbed. Conditioned on this, a second model predicts the exit position and direction of the light particle. Finally, a third model generates a representative sample of photon position and direction along the path, which is used to approximate the contribution of direct illumination due to in-scattering. To accelerate the tracing of the light path through the volumetric medium toward the solid boundary, we employ a sphere-tracing strategy that considers the light absorption and can perform a statistically accurate next-event estimation. We demonstrate efficient learning using shallow networks of only three layers and no more than 16 nodes. In combination with a GPU shader that evaluates the CVAEs' predictions, performance gains can be demonstrated for a variety of different scenarios. We analyze the approximation error that is introduced by the data-driven scattering simulation and shed light on the major sources of error.
  • Item
    Deep HDR Estimation with Generative Detail Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhang, Yang; Aydin, Tunc O.; Mitra, Niloy and Viola, Ivan
    We study the problem of High Dynamic Range (HDR) image reconstruction from a Standard Dynamic Range (SDR) input with potential clipping artifacts. Instead of building a direct model that maps from SDR to HDR images as in previous work, we decompose an input SDR image into a base (low frequency) and detail layer (high frequency), and treat reconstructing these two layers as two separate problems. We propose a novel architecture that comprises individual components specially designed to handle both tasks. Specifically, our base layer reconstruction component recovers low frequency content and remaps the color gamut of the input SDR, whereas our detail layer reconstruction component, which builds upon prior work on image inpainting, hallucinates missing texture information. The output HDR prediction is produced by a final refinement stage. We present qualitative and quantitative comparisons with existing techniques where our method achieves state-of-the-art performance.
  • Item
    Automatic Surface Segmentation for Seamless Fabrication Using 4-axis Milling Machines
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Nuvoli, Stefano; Tola, Alessandro; Muntoni, Alessandro; Pietroni, Nico; Gobbetti, Enrico; Scateni, Riccardo; Mitra, Niloy and Viola, Ivan
    We introduce a novel geometry-processing pipeline to guide the fabrication of complex shapes from a single block of material using 4-axis CNC milling machines. This setup extends classical 3-axis CNC machining with an extra degree of freedom to rotate the object around a fixed axis. The first step of our pipeline identifies the rotation axis that maximizes the overall fabrication accuracy. Then we identify two height-field regions at the rotation axis's extremes used to secure the block on the rotation tool. We segment the remaining portion of the mesh into a set of height-fields whose principal directions are orthogonal to the rotation axis. The segmentation balances the approximation quality, the boundary smoothness, and the total number of patches. Additionally, the segmentation process takes into account the object's geometric features, as well as saliency information. The output is a set of meshes ready to be processed by off-the-shelf software for the 3-axis tool-path generation. We present several results to demonstrate the quality and efficiency of our approach on a range of inputs.
  • Item
    Neural Acceleration of Scattering-Aware Color 3D Printing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Rittig, Tobias; Sumin, Denis; Babaei, Vahid; Didyk, Piotr; Voloboy, Alexey; Wilkie, Alexander; Bickel, Bernd; Myszkowski, Karol; Weyrich, Tim; Křivánek, Jaroslav; Mitra, Niloy and Viola, Ivan
    With the wider availability of full-color 3D printers, color-accurate 3D-print preparation has received increased attention. A key challenge lies in the inherent translucency of commonly used print materials that blurs out details of the color texture. Previous work tries to compensate for these scattering effects through strategic assignment of colored primary materials to printer voxels. To date, the highest-quality approach uses iterative optimization that relies on computationally expensive Monte Carlo light transport simulation to predict the surface appearance from subsurface scattering within a given print material distribution; that optimization, however, takes on the order of days on a single machine. In our work, we dramatically speed up the process by replacing the light transport simulation with a data-driven approach. Leveraging a deep neural network to predict the scattering within a highly heterogeneous medium, our method performs around two orders of magnitude faster than Monte Carlo rendering while yielding optimization results of similar quality. The network is based on an established method from atmospheric cloud rendering, adapted to our domain and extended by a physically motivated weight sharing scheme that substantially reduces the network size. We analyze its performance in an end-to-end print preparation pipeline, compare quality and runtime to alternative approaches, and demonstrate its generalization to unseen geometry and material values. This, for the first time, enables full heterogeneous material optimization for 3D-print preparation within time frames on the order of the actual printing time.
  • Item
    Levitating Rigid Objects with Hidden Rods and Wires
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Kushner, Sarah; Ulinski, Risa; Singh, Karan; Levin, David I. W.; Jacobson, Alec; Mitra, Niloy and Viola, Ivan
    We propose a novel algorithm to efficiently generate hidden structures to support arrangements of floating rigid objects. Our optimization finds a small set of rods and wires between objects and each other or a supporting surface (e.g., wall or ceiling) that hold all objects in force and torque equilibrium. Our objective function includes a sparsity-inducing total volume term and a linear visibility term based on efficiently pre-computed Monte-Carlo integration, to encourage solutions that are as hidden as possible. The resulting optimization is convex and the global optimum can be efficiently recovered via a linear program. Our representation allows for a user-controllable mixture of tension-, compression-, and shear-resistant rods or tension-only wires. We explore applications to theatre set design, museum exhibit curation, and other artistic endeavours.
  • Item
    Correlation-Aware Multiple Importance Sampling for Bidirectional Rendering Algorithms
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Grittmann, Pascal; Georgiev, Iliyan; Slusallek, Philipp; Mitra, Niloy and Viola, Ivan
    Combining diverse sampling techniques via multiple importance sampling (MIS) is key to achieving robustness in modern Monte Carlo light transport simulation. Many such methods additionally employ correlated path sampling to boost efficiency. Photon mapping, bidirectional path tracing, and path-reuse algorithms construct sets of paths that share a common prefix. This correlation is ignored by classical MIS heuristics, which can result in poor technique combination and noisy images. We propose a practical and robust solution to that problem. Our idea is to incorporate correlation knowledge into the balance heuristic, based on known path densities that are already required for MIS. This correlation-aware heuristic can achieve considerably lower error than the balance heuristic, while avoiding computational and memory overhead.
  • Item
    Cyclostationary Gaussian Noise: Theory and Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Lutz, Nicolas; Sauvage, Basile; Dischler, Jean-Michel; Mitra, Niloy and Viola, Ivan
    Stationary Gaussian processes have been used for decades in the context of procedural noises to model and synthesize textures with no spatial organization. In this paper we investigate cyclostationary Gaussian processes, whose statistics are repeated periodically. It enables the modeling of noises having periodic spatial variations, which we call "cyclostationary Gaussian noises". We adapt to the cyclostationary context several stationary noises along with their synthesis algorithms: spot noise, Gabor noise, local random-phase noise, high-performance noise, and phasor noise. We exhibit real-time synthesis of a variety of visual patterns having periodic spatial variations.
  • Item
    Learning and Exploring Motor Skills with Spacetime Bounds
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Ma, Li-Ke; Yang, Zeshi; Tong, Xin; Guo, Baining; Yin, KangKang; Mitra, Niloy and Viola, Ivan
    Equipping characters with diverse motor skills is the current bottleneck of physics-based character animation. We propose a Deep Reinforcement Learning (DRL) framework that enables physics-based characters to learn and explore motor skills from reference motions. The key insight is to use loose space-time constraints, termed spacetime bounds, to limit the search space in an early termination fashion. As we only rely on the reference to specify loose spacetime bounds, our learning is more robust with respect to low quality references. Moreover, spacetime bounds are hard constraints that improve learning of challenging motion segments, which can be ignored by imitation-only learning. We compare our method with state-of-the-art tracking-based DRL methods. We also show how to guide style exploration within the proposed framework.
  • Item
    LoBSTr: Real-time Lower-body Pose Prediction from Sparse Upper-body Tracking Signals
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Yang, Dongseok; Kim, Doyeon; Lee, Sung-Hee; Mitra, Niloy and Viola, Ivan
    With the popularization of games and VR/AR devices, there is a growing need for capturing human motion with a sparse set of tracking data. In this paper, we introduce a deep neural network (DNN) based method for real-time prediction of the lower-body pose only from the tracking signals of the upper-body joints. Specifically, our Gated Recurrent Unit (GRU)-based recurrent architecture predicts the lower-body pose and foot contact states from a past sequence of tracking signals of the head, hands, and pelvis. A major feature of our method is that the input signal is represented by the velocity of the tracking signals. We show that the velocity representation better models the correlation between upper-body and lower-body motions and is more robust to the diverse scales and proportions of user bodies than position-orientation representations. In addition, to remove foot-skating and floating artifacts, our network predicts the foot contact state, which is used to post-process the lower-body pose with inverse kinematics to preserve the contacts. Our network is lightweight enough to run in real-time applications. We show the effectiveness of our method through several quantitative evaluations against other architectures and input representations on wild tracking data obtained from commercial VR devices.
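    A hedged sketch of a GRU regressor of the kind the abstract describes: a window of upper-body velocity signals in, lower-body pose plus foot-contact logits out; all dimensions are placeholders, not the paper's architecture.
      import torch
      import torch.nn as nn

      class LowerBodyGRU(nn.Module):
          def __init__(self, in_dim=4 * 9, pose_dim=8 * 6, hidden=256):
              # in_dim: 4 trackers (head, hands, pelvis) x 9 velocity features (placeholder)
              super().__init__()
              self.gru = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
              self.pose_head = nn.Linear(hidden, pose_dim)   # lower-body joint rotations
              self.contact_head = nn.Linear(hidden, 2)       # left/right foot contact logits

          def forward(self, vel_window):                     # (batch, frames, in_dim)
              h, _ = self.gru(vel_window)
              last = h[:, -1]                                # most recent frame
              return self.pose_head(last), self.contact_head(last)

      model = LowerBodyGRU()
      pose, contact = model(torch.randn(1, 60, 36))          # e.g., 1 second at 60 fps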
  • Item
    Layout Embedding via Combinatorial Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Born, Janis; Schmidt, Patrick; Kobbelt, Leif; Mitra, Niloy and Viola, Ivan
    We consider the problem of injectively embedding a given graph connectivity (a layout) into a target surface. Starting from prescribed positions of layout vertices, the task is to embed all layout edges as intersection-free paths on the surface. Besides merely geometric choices (the shape of paths) this problem is especially challenging due to its topological degrees of freedom (how to route paths around layout vertices). The problem is typically addressed through a sequence of shortest path insertions, ordered by a greedy heuristic. Such insertion sequences are not guaranteed to be optimal: Early path insertions can potentially force later paths into unexpected homotopy classes. We show how common greedy methods can easily produce embeddings of dramatically bad quality, rendering such methods unsuitable for automatic processing pipelines. Instead, we strive to find the optimal order of insertions, i.e. the one that minimizes the total path length of the embedding. We demonstrate that, despite the vast combinatorial solution space, this problem can be effectively solved on simply-connected domains via a custom-tailored branch-and-bound strategy. This enables directly using the resulting embeddings in downstream applications which cannot recover from initializations in a wrong homotopy class. We demonstrate the robustness of our method on a shape dataset by embedding a common template layout per category, and show applications in quad meshing and inter-surface mapping.
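    A much-simplified, toy stand-in for the idea of searching over insertion orders: layout edges are embedded as shortest paths on a flat grid graph (rather than a surface mesh), interior vertices of earlier paths block later ones, and orders are enumerated with a simple pruning bound. The grid, layout edges, and bound are illustrative assumptions, not the paper's branch-and-bound.
      import itertools
      import networkx as nx

      G = nx.grid_2d_graph(8, 8)
      layout_edges = [((0, 0), (7, 7)), ((0, 4), (4, 0)), ((3, 7), (7, 3))]

      def embedding_length(order, best):
          """Total length of inserting paths in this order; inf if infeasible
          or already worse than the best complete order found so far."""
          total, blocked = 0, set()
          for s, t in order:
              H = G.copy()
              H.remove_nodes_from(blocked - {s, t})
              try:
                  path = nx.shortest_path(H, s, t)
              except nx.NetworkXNoPath:
                  return float("inf")
              total += len(path) - 1
              if total >= best:                      # pruning bound
                  return float("inf")
              blocked |= set(path[1:-1])             # interior vertices block later paths
          return total

      best = float("inf")
      for order in itertools.permutations(layout_edges):
          best = min(best, embedding_length(order, best))
      print("best total embedding length:", best)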
  • Item
    Geometric Construction of Auxetic Metamaterials
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Bonneau, Georges-Pierre; Hahmann, Stefanie; Marku, Johana; Mitra, Niloy and Viola, Ivan
    This paper is devoted to a category of metamaterials called auxetics, identified by their negative Poisson's ratio. Our work explores geometric strategies to generate irregular auxetic structures. More precisely, we seek to reduce the Poisson's ratio ν by pruning an irregular network based solely on geometric criteria. We introduce a strategy combining a pure geometric pruning algorithm followed by a physics-based testing phase to determine the resulting Poisson's ratio of our structures. We propose an algorithm that generates sets of irregular auxetic networks. Our contributions include a geometrical characterization of auxetic networks, the development of a pruning strategy, the generation of auxetic networks with low Poisson's ratio, as well as the validation of our approach. We provide statistical validation of our approach on large sets of irregular networks, and we additionally laser-cut auxetic networks in sheets of rubber. The findings reported here show that it is possible to reduce the Poisson's ratio by geometric pruning, and that we can generate irregular auxetic networks with lower processing times than a physics-based approach.
  • Item
    Quad Layouts via Constrained T-Mesh Quantization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Lyon, Max; Campen, Marcel; Kobbelt, Leif; Mitra, Niloy and Viola, Ivan
    We present a robust and fast method for the creation of conforming quad layouts on surfaces. Our algorithm is based on the quantization of a T-mesh, i.e. an assignment of integer lengths to the sides of a non-conforming rectangular partition of the surface. This representation has the benefit of being able to encode an infinite number of layout connectivity options in a finite manner, which guarantees that a valid layout can always be found. We carefully construct the T-mesh from a given seamless parametrization such that the algorithm can provide guarantees on the results' quality. In particular, the user can specify a bound on the angular deviation of layout edges from prescribed directions. We solve an integer linear program (ILP) to find a coarse quad layout adhering to that maximal deviation. Our algorithm is guaranteed to yield a conforming quad layout free of T-junctions together with bounded angle distortion. Our results show that the presented method is fast, reliable, and achieves high quality layouts.
  • Item
    Adversarial Single-Image SVBRDF Estimation with Hybrid Training
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhou, Xilong; Kalantari, Nima Khademi; Mitra, Niloy and Viola, Ivan
    In this paper, we propose a deep learning approach for estimating the spatially-varying BRDF (SVBRDF) from a single image. Most existing deep learning techniques use pixel-wise loss functions, which limit the flexibility of the networks in handling this highly unconstrained problem. Moreover, since obtaining ground truth SVBRDF parameters is difficult, most methods typically train their networks on synthetic images and, therefore, do not effectively generalize to real examples. To avoid these limitations, we propose an adversarial framework to handle this application. Specifically, we estimate the material properties using an encoder-decoder convolutional neural network (CNN) and train it through a series of discriminators that distinguish the output of the network from ground truth. To address the gap in data distribution of synthetic and real images, we train our network on both synthetic and real examples. Specifically, we propose a strategy to train our network on pairs of real images of the same object with different lighting. We demonstrate that our approach is able to handle a variety of cases better than the state-of-the-art methods.
  • Item
    Perceptual Quality of BRDF Approximations: Dataset and Metrics
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Lavoué, Guillaume; Bonneel, Nicolas; Farrugia, Jean-Philippe; Soler, Cyril; Mitra, Niloy and Viola, Ivan
    Bidirectional Reflectance Distribution Functions (BRDFs) are pivotal to the perceived realism in image synthesis. While measured BRDF datasets are available, reflectance functions are most of the time approximated by analytical formulas for storage efficiency reasons. These approximations are often obtained by minimizing metrics such as L2 or weighted quadratic distances, but these metrics do not usually correlate well with perceptual quality when the BRDF is used in a rendering context, which motivates a perceptual study. The contributions of this paper are threefold. First, we perform a large-scale user study to assess the perceptual quality of 2026 BRDF approximations, resulting in 84,138 judgments across 1,005 unique participants. We explore this dataset and analyze perceptual scores based on material type and illumination. Second, we assess nine analytical BRDF models in their ability to approximate tabulated BRDFs. Third, we assess several image-based and BRDF-based (Lp, optimal transport, and kernel distance) metrics in their ability to approximate perceptual similarity judgments.
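    For reference, a sketch of one family of BRDF-space metrics evaluated in such studies: an Lp distance between two tabulated BRDFs, optionally cosine-weighted (the exact weighting used in the paper may differ).
      import numpy as np

      def brdf_lp_distance(f_a, f_b, cos_theta_out, p=2, cosine_weighted=True):
          """f_a, f_b      : BRDF values tabulated on the same angular grid
             cos_theta_out : outgoing cosine for each tabulated entry"""
          diff = np.abs(f_a - f_b)
          if cosine_weighted:
              diff = diff * cos_theta_out
          return np.mean(diff ** p) ** (1.0 / p)

      rng = np.random.default_rng(1)
      f_measured = rng.random(1000)
      f_fitted = f_measured + 0.01 * rng.standard_normal(1000)
      print(brdf_lp_distance(f_measured, f_fitted, cos_theta_out=rng.random(1000)))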
  • Item
    Honey, I Shrunk the Domain: Frequency-aware Force Field Reduction for Efficient Fluids Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Tang, Jingwei; Azevedo, Vinicius C.; Cordonnier, Guillaume; Solenthaler, Barbara; Mitra, Niloy and Viola, Ivan
    Fluid control often uses optimization of control forces that are added to a simulation at each time step, such that the final animation matches a single or multiple target density keyframes provided by an artist. The optimization problem is strongly under-constrained with a high-dimensional parameter space, and finding optimal solutions is challenging, especially for higher resolution simulations. In this paper, we propose two novel ideas that jointly tackle the lack of constraints and the high dimensionality of the parameter space. We first consider the fact that optimized forces are allowed to have divergent modes during the optimization process. These divergent modes are not entirely projected out by the pressure solver step, manifesting as unphysical smoke sources that are explored by the optimizer to match a desired target. Thus, we reduce the space of possible forces to the family of strictly divergence-free velocity fields by optimizing directly for a vector potential. We synergistically combine this with a smoothness regularization based on a spectral decomposition of the control force fields. Our method enforces lower frequencies of the force fields to be optimized first by filtering force frequencies in the Fourier domain with a growing mask, a strategy inspired by Kolmogorov's theory of turbulence scales. We demonstrate improved results for 2D and 3D fluid control, especially in higher-resolution settings, while eliminating the need for manual parameter tuning. We showcase various applications of our method, where the user effectively creates or edits smoke simulations.
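    An illustrative 2D sketch of the two ingredients, under the assumption of a regular grid: (i) the control velocity is the curl of a potential (a scalar stream function in 2D), hence divergence-free by construction, and (ii) only low-frequency modes of that potential are kept via a Fourier-domain mask. Grid size and mask radius are arbitrary.
      import numpy as np

      n = 64
      psi = np.random.default_rng(0).standard_normal((n, n))    # stand-in for the potential

      # (ii) keep only low-frequency modes of the potential
      k = np.fft.fftfreq(n)
      kx, ky = np.meshgrid(k, k, indexing="ij")
      mask = (kx ** 2 + ky ** 2) < (4.0 / n) ** 2                # small radius -> coarse control
      psi_low = np.real(np.fft.ifft2(np.fft.fft2(psi) * mask))

      # (i) velocity = curl of the potential: u = d(psi)/dy, v = -d(psi)/dx
      u = np.gradient(psi_low, axis=1)
      v = -np.gradient(psi_low, axis=0)
      div = np.gradient(u, axis=0) + np.gradient(v, axis=1)
      print("max |divergence|:", np.abs(div).max())              # ~0 up to floating point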
  • Item
    Two-step Temporal Interpolation Network Using Forward Advection for Efficient Smoke Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Oh, Young Jin; Lee, In-Kwon; Mitra, Niloy and Viola, Ivan
    In this paper, we propose a two-step temporal interpolation network using forward advection to generate smoke simulation efficiently. By converting a low frame rate smoke simulation computed with a large time step into a high frame rate smoke simulation through inference of temporal interpolation networks, the proposed method can efficiently generate smoke simulation with a high frame rate and low computational costs. The first step of the proposed method is optical flow-based temporal interpolation using deep neural networks (DNNs) for two given smoke animation frames. In the next step, we compute temporary smoke frames with forward advection, a physical computation with a low computational cost. We then interpolate between the results of the forward advection and those of the first step to generate more accurate and enhanced interpolated results. We performed quantitative analyses of the results generated by the proposed method and previous temporal interpolation methods. Furthermore, we experimentally compared the performance of the proposed method with previous methods using DNNs for smoke simulation. We found that the results generated by the proposed method are more accurate and closer to the ground truth smoke simulation than those generated by the previous temporal interpolation methods. We also confirmed that the proposed method generates smoke simulation results more efficiently with lower computational costs than previous smoke simulation methods using DNNs.
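    A crude sketch of a forward-advection step of the kind mentioned above: each cell's density is scattered to the cell its velocity points to (nearest-cell splatting); the paper's interpolation networks are not sketched.
      import numpy as np

      def forward_advect(density, vel_x, vel_y, dt):
          """Scatter each cell's density to the cell its velocity points to."""
          n, m = density.shape
          out = np.zeros_like(density)
          ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
          yt = np.clip(np.rint(ys + dt * vel_y), 0, n - 1).astype(int)
          xt = np.clip(np.rint(xs + dt * vel_x), 0, m - 1).astype(int)
          np.add.at(out, (yt, xt), density)        # accumulate mass at target cells
          return out

      rho = np.zeros((64, 64)); rho[30:34, 30:34] = 1.0
      rho_half = forward_advect(rho, vel_x=np.full((64, 64), 2.0),
                                vel_y=np.zeros((64, 64)), dt=0.5)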
  • Item
    Patch Erosion for Deformable Lapped Textures on 3D Fluids
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Gagnon, Jonathan; Guzmán, Julián E.; Mould, David; Paquette, Eric; Mitra, Niloy and Viola, Ivan
    We propose an approach to synthesise a texture on an animated fluid free surface using a distortion metric combined with a feature map. Our approach is applied as a post-process to a fluid simulation. We advect deformable patches to move the texture along the fluid flow. The patches cover the whole surface in every frame of the animation in an overlapping fashion. Using lapped textures combined with deformable patches, we successfully remove the blending and rigidity artifacts seen in previous methods. We remain faithful to the texture exemplar by removing distorted patch texels using a patch erosion process. The patch erosion is based on a feature map provided together with the exemplar as input to our approach. The erosion favors removing texels toward the boundary of the patch as well as texels corresponding to more distorted regions of the patch. Where texels are removed, leaving a gap on the surface, we add new patches below existing ones. The result is an animated texture following the velocity field of the fluid. We compare our results with recent work and show that our approach removes ghosting and temporal fading artifacts.
  • Item
    Walk2Map: Extracting Floor Plans from Indoor Walk Trajectories
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Mura, Claudio; Pajarola, Renato; Schindler, Konrad; Mitra, Niloy; Mitra, Niloy and Viola, Ivan
    Recent years have seen a proliferation of new digital products for the efficient management of indoor spaces, with important applications like emergency management, virtual property showcasing and interior design. While highly innovative and effective, these products rely on accurate 3D models of the environments considered, including information on both architectural and non-permanent elements. These models must be created from measured data such as RGB-D images or 3D point clouds, whose capture and consolidation involves lengthy data workflows. This strongly limits the rate at which 3D models can be produced, preventing the adoption of many digital services for indoor space management. We provide a radical alternative to such data-intensive procedures by presenting Walk2Map, a data-driven approach to generate floor plans only from trajectories of a person walking inside the rooms. Thanks to recent advances in data-driven inertial odometry, such minimalistic input data can be acquired from the IMU readings of consumer-level smartphones, which allows for an effortless and scalable mapping of real-world indoor spaces. Our work is based on learning the latent relation between an indoor walk trajectory and the information represented in a floor plan: interior space footprint, portals, and furniture. We distinguish between recovering area-related (interior footprint, furniture) and wall-related (doors) information and use two different neural architectures for the two tasks: an image-based Encoder-Decoder and a Graph Convolutional Network, respectively. We train our networks using scanned 3D indoor models and apply them in a cascaded fashion on an indoor walk trajectory at inference time. We perform a qualitative and quantitative evaluation using both trajectories simulated from scanned models of interiors and measured, real-world trajectories, and compare against a baseline method for image-to-image translation. The experiments confirm that our technique is viable and allows recovering reliable floor plans from minimal walk trajectory data.
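    A hedged sketch of how a walk trajectory might be rasterized into the image an encoder-decoder consumes (resolution, extent, and the random-walk input are placeholder assumptions, not the paper's pipeline).
      import numpy as np

      def rasterize_trajectory(xy, extent=10.0, res=128):
          """xy: (n, 2) positions in metres, assumed within [0, extent)^2."""
          img = np.zeros((res, res), dtype=np.float32)
          ij = np.clip((xy / extent * res).astype(int), 0, res - 1)
          img[ij[:, 1], ij[:, 0]] = 1.0            # mark visited cells
          return img

      walk = np.cumsum(np.random.default_rng(0).normal(0, 0.05, size=(2000, 2)), axis=0) + 5.0
      occupancy = rasterize_trajectory(walk)       # input channel for the network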
  • Item
    Learning Human Search Behavior from Egocentric Visual Inputs
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Sorokin, Maks; Yu, Wenhao; Ha, Sehoon; Liu, C. Karen; Mitra, Niloy and Viola, Ivan
    ''Looking for things'' is a mundane but critical task we repeatedly carry out in our daily lives. We introduce a method to develop a human character capable of searching for a randomly located target object in a detailed 3D scene using its locomotion capability and egocentric vision perception represented as RGBD images. By depriving the human character of privileged 3D information, it is forced to move and look around simultaneously to account for the restricted sensing capability, resulting in natural navigation and search behaviors. Our method consists of two components: 1) a search control policy based on an abstract character model, and 2) an online replanning control module for synthesizing detailed kinematic motion based on the trajectories planned by the search policy. We demonstrate that the combined techniques enable the character to effectively find often occluded household items in indoor environments. The same search policy can be applied to different full-body characters without the need for retraining. We evaluate our method quantitatively by testing it on randomly generated scenarios. Our work is a first step toward creating intelligent virtual agents with humanlike behaviors driven by onboard sensors, paving the way toward future robotic applications.
  • Item
    Deep Detail Enhancement for Any Garment
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Zhang, Meng; Wang, Tuanfeng; Ceylan, Duygu; Mitra, Niloy J.; Mitra, Niloy and Viola, Ivan
    Creating fine garment details requires significant effort and huge computational resources. In contrast, a coarse shape may be easy to acquire in many scenarios (e.g., via low-resolution physically-based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to enhance a coarse garment geometry with rich yet plausible details in a data-driven manner. Once the parameterization of the garment is given, we formulate the task as a style transfer problem over the space of associated normal maps. In order to facilitate generalization across garment types and character motions, we introduce a patch-based formulation that hallucinates high-resolution geometric details (i.e., wrinkle density and shape) by matching a Gram-matrix-based style loss. We extensively evaluate our method on a variety of production scenarios and show that our method is simple, light-weight, efficient, and generalizes across underlying garment types, sewing patterns, and body motion. Project page: http://geometry.cs.ucl.ac.uk/projects/2021/DeepDetailEnhance/
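    A minimal sketch of a Gram-matrix style loss of the kind mentioned above, computed on arbitrary feature maps (the paper's feature extractor and patch pipeline are not reproduced).
      import torch

      def gram(features):                      # features: (batch, channels, h, w)
          b, c, h, w = features.shape
          f = features.reshape(b, c, h * w)
          return f @ f.transpose(1, 2) / (c * h * w)   # channel-by-channel correlations

      def style_loss(feat_coarse, feat_detailed):
          return torch.mean((gram(feat_coarse) - gram(feat_detailed)) ** 2)

      loss = style_loss(torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32))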
  • Item
    Enabling Viewpoint Learning through Dynamic Label Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Schelling, Michael; Hermosilla, Pedro; Vázquez, Pere-Pau; Ropinski, Timo; Mitra, Niloy and Viola, Ivan
    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation. Code and training data are available at https://github.com/schellmi42/viewpoint_learning; to our knowledge, this is the biggest viewpoint quality dataset available.
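    A small sketch of the dynamic-label idea as described: when several viewpoints are near-equally good for a model, regress toward the candidate closest to the network's current prediction (candidate sets and shapes are illustrative assumptions).
      import torch

      def dynamic_label_loss(pred_dirs, candidate_dirs):
          """pred_dirs      : (batch, 3) predicted view directions
             candidate_dirs : (batch, k, 3) near-optimal viewpoints per model"""
          d = torch.linalg.norm(candidate_dirs - pred_dirs[:, None, :], dim=-1)
          target = candidate_dirs[torch.arange(len(pred_dirs)), d.argmin(dim=1)]
          return torch.mean((pred_dirs - target) ** 2)

      pred = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
      cands = torch.nn.functional.normalize(torch.randn(8, 5, 3), dim=-1)
      print(dynamic_label_loss(pred, cands))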
  • Item
    Blue Noise Plots
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Onzenoodt, Christian van; Singh, Gurprit; Ropinski, Timo; Ritschel, Tobias; Mitra, Niloy and Viola, Ivan
    We propose Blue Noise Plots, two-dimensional dot plots that depict data points of univariate data sets. While one-dimensional strip plots are often used to depict such data, one of their main problems is visual clutter, which results from overlap. To reduce this overlap, jitter plots were introduced, which add a non-encoding plot dimension along which the dots representing data points are randomly perturbed. Unfortunately, this randomness can suggest non-existent clusters, and often leads to visually unappealing plots in which overlap might still occur. To overcome these shortcomings, we introduce Blue Noise Plots, where random jitter along the non-encoding plot dimension is replaced by optimizing all dots to keep a minimum distance in 2D, i.e., blue noise. We evaluate the effectiveness as well as the aesthetics of Blue Noise Plots through both a quantitative and a qualitative user study. The Python implementation of Blue Noise Plots is available here.
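    A toy stand-in for the optimization (not the paper's algorithm): data values fix the x-coordinates, and the non-encoding y-coordinates are relaxed by a crude pairwise repulsion so that dots approximately keep a minimum 2D distance instead of being randomly jittered.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)                      # univariate data -> x positions
      y = 1e-3 * rng.standard_normal(200)           # tiny initial offsets
      r_min = 0.06                                  # target minimum 2D distance

      for _ in range(200):                          # crude repulsion relaxation
          dx = x[:, None] - x[None, :]
          dy = y[:, None] - y[None, :]
          d = np.hypot(dx, dy) + np.eye(len(x))     # +eye avoids self-interaction
          push = np.where(d < r_min, (r_min - d) * np.sign(dy + 1e-9), 0.0)
          y += 0.5 * push.sum(axis=1)               # push overlapping dots apart in y
          y *= 0.98                                 # pull dots back toward the strip

      plot_points = np.stack([x, y], axis=1)        # feed to any scatter plot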
  • Item
    Orthogonalized Fourier Polynomials for Signal Approximation and Transfer
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Maggioli, Filippo; Melzi, Simone; Ovsjanikov, Maks; Bronstein, Michael M.; Rodolà, Emanuele; Mitra, Niloy and Viola, Ivan
    We propose a novel approach for the approximation and transfer of signals across 3D shapes. The proposed solution is based on taking pointwise polynomials of the Fourier-like Laplacian eigenbasis, which provides a compact and expressive representation for general signals defined on the surface. Key to our approach is the construction of a new orthonormal basis upon the set of these linearly dependent polynomials. We analyze the properties of this representation, and further provide a complete analysis of the involved parameters. Our technique results in accurate approximation and transfer of various families of signals between near-isometric and non-isometric shapes, even under poor initialization. Our experiments, showcased on a selection of downstream tasks such as filtering and detail transfer, show that our method is more robust to discretization artifacts, deformation and noise as compared to alternative approaches.
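    A hedged numpy sketch of the core construction, with random vectors standing in for actual Laplacian eigenfunctions and a lumped (diagonal) mass matrix: form pointwise products of the basis functions, then orthonormalize the resulting linearly dependent set with respect to the mass-weighted inner product via a weighted QR.
      import numpy as np
      from itertools import combinations_with_replacement

      rng = np.random.default_rng(0)
      n, k = 500, 10
      mass = rng.random(n) + 0.1                    # lumped vertex masses (diagonal M)
      phi = rng.standard_normal((n, k))             # placeholder for Laplacian eigenvectors

      # pointwise (Hadamard) products phi_i * phi_j, appended to the originals
      products = np.stack([phi[:, i] * phi[:, j]
                           for i, j in combinations_with_replacement(range(k), 2)], axis=1)
      basis = np.concatenate([phi, products], axis=1)

      # orthonormalize w.r.t. <f, g>_M = f^T diag(mass) g using a weighted QR
      w = np.sqrt(mass)[:, None]
      q, _ = np.linalg.qr(w * basis)                # orthonormal columns in weighted space
      ortho_basis = q / w                           # back to function values; M-orthonormal
      gram = ortho_basis.T @ (mass[:, None] * ortho_basis)
      print(np.allclose(gram, np.eye(gram.shape[1]), atol=1e-8))   # True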
  • Item
    Physically-based Book Simulation with Freeform Developable Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Wolf, Thomas; Cornillère, Victor; Sorkine-Hornung, Olga; Mitra, Niloy and Viola, Ivan
    Reading books or articles digitally has become accessible and widespread thanks to the large number of affordable mobile devices and distribution platforms. However, little effort has been devoted to improving the digital book reading experience, despite studies showing disadvantages of digital text media consumption, such as diminished memory recall and enjoyment, compared to physical books. In addition, a vast number of physical, printed books of interest exist, many of them rare and not easily physically accessible, such as out-of-print art books, first editions, or historical tomes secured in museums. Digital replicas of such books are typically either purely text based, or consist of photographed pages, where much of the essence of leafing through and experiencing the actual artifact is lost. In this work, we devise a method to recreate the experience of reading and interacting with a physical book in a digital 3D environment. Leveraging recent work on static modeling of freeform developable surfaces, which exhibit paper-like properties, we design a method for dynamic physical simulation of such surfaces, accounting for gravity and handling collisions to simulate pages in a book. We propose a mix of 2D and 3D models, specifically tailored to represent books to achieve a computationally fast simulation, running in real time on mobile devices. Our system enables users to lift, bend and flip book pages by holding them at arbitrary locations and provides a holistic interactive experience of a virtual 3D book.
  • Item
    Curve Complexity Heuristic KD-trees for Neighborhood-based Exploration of 3D Curves
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Lu, Yucheng; Cheng, Luyu; Isenberg, Tobias; Fu, Chi-Wing; Chen, Guoning; Liu, Hui; Deussen, Oliver; Wang, Yunhai; Mitra, Niloy and Viola, Ivan
    We introduce the curve complexity heuristic (CCH), a KD-tree construction strategy for 3D curves, which enables interactive exploration of neighborhoods in dense and large line datasets. It can be applied to searches of k-nearest curves (KNC) as well as radius-nearest curves (RNC). The CCH KD-tree construction consists of two steps: (i) 3D curve decomposition that takes into account curve complexity and (ii) KD-tree construction, which involves a novel splitting and early termination strategy. The obtained KD-tree allows us to improve the speed of existing neighborhood search approaches by at least an order of magnitude (i.e., 28× for KNC and 12× for RNC with 98% accuracy) by considering local curve complexity. We validate this performance with a quantitative evaluation of the quality of search results and computation time. Also, we demonstrate the usefulness of our approach for supporting various applications such as interactive line queries, line opacity optimization, and line abstraction.
  • Item
    SnakeBinning: Efficient Temporally Coherent Triangle Packing for Shading Streaming
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Hladky, Jozef; Seidel, Hans-Peter; Steinberger, Markus; Mitra, Niloy and Viola, Ivan
    Streaming rendering, e.g., rendering in the cloud and streaming via a mobile connection, suffers from increased latency and unreliable connections. High-quality framerate upsampling can hide these issues, especially when capturing shading into an atlas and transmitting it alongside geometric information. The captured shading information must consider triangle footprints and temporal stability to ensure efficient video encoding. Previous approaches only consider either temporal stability or sample distributions, but none focuses on both. With SnakeBinning, we present an efficient triangle packing approach that adjusts sample distributions and caters for temporal coherence. Using a multi-dimensional binning approach, we enforce tight packing among triangles while creating optimal sample distributions. Our binning is built on top of hardware-supported real-time rendering, where bins are mapped to individual pixels in a virtual framebuffer. Fragment shader interlock and atomic operations enforce a global ordering of triangles within each bin, and thus temporal coherence according to the primitive order is achieved. Resampling the bin distribution guarantees high occupancy among all bins and a dense atlas packing. Shading samples are directly captured into the atlas using a rasterization pass, adjusting samples for perspective effects and creating a tight packing. Comparison to previous atlas packing approaches shows that our approach is faster than previous work and achieves the best sample distributions while maintaining temporal coherence. In this way, SnakeBinning achieves the highest rendering quality under equal atlas memory requirements. At the same time, its temporal coherence ensures that we require no more bandwidth than the previous state of the art. As SnakeBinning outperforms previous approaches in all relevant aspects, it is the preferred choice for texture-based streaming rendering.
  • Item
    Hierarchical Raster Occlusion Culling
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Lee, Gi Beom; Jeong, Moonsoo; Seok, Yechan; Lee, Sungkil; Mitra, Niloy and Viola, Ivan
    This paper presents a scalable online occlusion culling algorithm, which significantly improves on previous raster occlusion culling using an object-level bounding volume hierarchy. Given occluders found with temporal coherence, we find and rasterize coarse groups of potential occludees in the hierarchy. Within the rasterized bounds, per-pixel ray casting tests the fine-grained visibility of each individual occludee. We further propose acceleration techniques, including the read-back of counters for tightly-packed multidrawing and occluder filtering. Our solution requires only a constant number of draw calls for batch occlusion tests, while avoiding costly iteration for hierarchy traversal. Our experiments show that our solution outperforms existing solutions in terms of scalability, culling efficiency, and occlusion-query performance.
  • Item
    Interactive Photo Editing on Smartphones via Intrinsic Decomposition
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Shekhar, Sumit; Reimann, Max; Mayer, Maximilian; Semmo, Amir; Pasewaldt, Sebastian; Döllner, Jürgen; Trapp, Matthias; Mitra, Niloy and Viola, Ivan
    Intrinsic decomposition refers to the problem of estimating scene characteristics, such as albedo and shading, when one view or multiple views of a scene are provided. The inverse problem setting, where multiple unknowns are solved given a single known pixel value, is highly under-constrained. When provided with correlated image and depth data, intrinsic scene decomposition can be facilitated using depth-based priors, and such data is nowadays easy to acquire with high-end smartphones by utilizing their depth sensors. In this work, we present a system for intrinsic decomposition of RGB-D images on smartphones and the algorithmic as well as design choices therein. Unlike state-of-the-art methods that assume only diffuse reflectance, we consider both diffuse and specular pixels. For this purpose, we present a novel specularity extraction algorithm based on a multi-scale intensity decomposition and chroma inpainting. The diffuse component is then further decomposed into albedo and shading components. We use an inertial proximal algorithm for non-convex optimization (iPiano) to ensure albedo sparsity. Our GPU-based visual processing is implemented on iOS via the Metal API and enables interactive performance on an iPhone 11 Pro. A qualitative evaluation shows that we are able to obtain high-quality outputs. Furthermore, our proposed approach for specularity removal outperforms state-of-the-art approaches on real-world images, while our albedo and shading layer decomposition is faster than prior work at a comparable output quality. A variety of applications such as recoloring, retexturing, relighting, appearance editing, and stylization are shown, each using the intrinsic layers obtained with our method and/or the corresponding depth data.
  • Item
    RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Wong, Yu-Shiang; Li, Changjian; Nießner, Matthias; Mitra, Niloy J.; Mitra, Niloy and Viola, Ivan
    Although surface reconstruction from depth data has made significant advances in recent years, handling changing environments remains a major challenge. This is unsatisfactory, as humans regularly move objects in their environments. Existing solutions focus on a restricted set of objects (e.g., those detected by semantic classifiers), possibly with template meshes, assume a static camera, or mark objects touched by humans as moving. We remove these assumptions by introducing RigidFusion. Our core idea is a novel asynchronous moving-object detection method, combined with a modified volumetric fusion. This is achieved by a model-to-frame TSDF decomposition leveraging free-space carving of tracked depth values of the current frame with respect to the background model during run-time. As output, we produce separate volumetric reconstructions for the background and each moving object in the scene, along with its trajectory over time. Our method does not rely on object priors (e.g., semantic labels or pre-scanned meshes) and is insensitive to the motion residuals between objects and the camera. In comparison to state-of-the-art methods (e.g., Co-Fusion, MaskFusion), we handle significantly more challenging reconstruction scenarios involving a moving camera and improve moving-object detection (26% on the miss-detection ratio), tracking (27% on MOTA), and reconstruction (3% on the reconstruction F1) on the synthetic dataset. Please refer to the supplementary material and the project website for the video demonstration (geometry.cs.ucl.ac.uk/projects/2021/rigidfusion).
  • Item
    Spatiotemporal Texture Reconstruction for Dynamic Objects Using a Single RGB-D Camera
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Kim, Hyomin; Kim, Jungeon; Nam, Hyeonseo; Park, Jaesik; Lee, Seungyong; Mitra, Niloy and Viola, Ivan
    This paper presents an effective method for generating a spatiotemporal (time-varying) texture map for a dynamic object using a single RGB-D camera. The input of our framework is a 3D template model and an RGB-D image sequence. Since there are invisible areas of the object at a frame in a single-camera setup, textures of such areas need to be borrowed from other frames. We formulate the problem as an MRF optimization and define cost functions to reconstruct a plausible spatiotemporal texture for a dynamic object. Experimental results demonstrate that our spatiotemporal textures can reproduce the active appearances of captured objects better than approaches using a single texture map.
  • Item
    MultiResGNet: Approximating Nonlinear Deformation via Multi-Resolution Graphs
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Li, Tianxing; Shi, Rui; Kanai, Takashi; Mitra, Niloy and Viola, Ivan
    This paper presents a graph-learning-based method with strong generalization ability for automatically generating nonlinear deformation for characters with an arbitrary number of vertices. Large-scale character datasets with a significant number of poses are normally required for training to learn such automatic generalization tasks. There are two key contributions that enable us to address this challenge while keeping our network general enough to achieve realistic deformation approximation. First, after the automatic linear-based deformation step, we encode the roughly deformed meshes by constructing graphs, for which we propose a novel graph feature representation method with three descriptors to represent meshes of arbitrary characters in varying poses. Second, we design a multi-resolution graph network (MultiResGNet) that takes the constructed graphs as input and outputs, end-to-end, the offset adjustments of each vertex. By processing multi-resolution graphs, general features can be better extracted, and the network training no longer heavily relies on large amounts of training data. Experimental results show that the proposed method achieves better performance than prior studies in deformation approximation for unseen characters and poses.
  • Item
    Velocity Skinning for Real-time Stylized Skeletal Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Rohmer, Damien; Tarini, Marco; Kalyanasundaram, Niranjan; Moshfeghifar, Faezeh; Cani, Marie-Paule; Zordan, Victor; Mitra, Niloy and Viola, Ivan
    Secondary animation effects are essential for liveliness. We propose a simple, real-time solution for adding them on top of standard skinning, enabling artist-driven stylization of skeletal motion. Our method takes a standard skeleton animation as input, along with a skin mesh and rig weights. It then derives per-vertex deformations from the different linear and angular velocities along the skeletal hierarchy. We highlight two specific applications of this general framework, namely the cartoon-like ''squashy'' and ''floppy'' effects, achieved from specific combinations of velocity terms. As our results show, combining these effects makes it possible to mimic, enhance, and stylize physical-looking behaviours within a standard animation pipeline, for arbitrary skinned characters. Interactive on the CPU, our method also allows for a GPU implementation, yielding real-time performance even on large meshes. Animator control is supported through a simple interface toolkit, enabling artists to refine the desired type and magnitude of deformation at relevant vertices by simply painting weights. The resulting rigged character automatically responds to new skeletal animation, without further input.
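    A hedged sketch of one plausible reading of the ''floppy'' term: each vertex is displaced opposite to the velocity it inherits from the skeleton through its rig weights; the constant k, the data, and the exact formulation are assumptions, not the paper's.
      import numpy as np

      def floppy_offsets(verts, weights, joint_pos, joint_lin_vel, joint_ang_vel, k=0.05):
          """verts   : (n, 3) skinned vertex positions
             weights : (n, j) rig weights
             joint_* : (j, 3) per-joint position, linear and angular velocity"""
          # velocity each joint imparts at each vertex: v + omega x (p - c)
          arm = verts[:, None, :] - joint_pos[None, :, :]                 # (n, j, 3)
          v_at_vert = joint_lin_vel[None, :, :] + np.cross(joint_ang_vel[None, :, :], arm)
          v_skinned = np.einsum("nj,njc->nc", weights, v_at_vert)         # blend by rig weights
          return -k * v_skinned                                           # lag opposite to motion

      n, j = 1000, 4
      rng = np.random.default_rng(0)
      offsets = floppy_offsets(rng.random((n, 3)), np.full((n, j), 1.0 / j),
                               rng.random((j, 3)), rng.random((j, 3)), rng.random((j, 3)))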
  • Item
    STALP: Style Transfer with Auxiliary Limited Pairing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Futschik, David; Kucera, Michal; Lukác, Mike; Wang, Zhaowen; Shechtman, Eli; Sýkora, Daniel; Mitra, Niloy and Viola, Ivan
    We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time, semantically meaningful style transfer to a set of target images with similar content as the source image. A key added value of our approach is that it also considers the consistency of target images during training. Although those have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use a similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to explicitly handle temporal consistency. We demonstrate its practical utility on various applications including video stylization, style transfer to panoramas, faces, and 3D models.
  • Item
    Local Light Alignment for Multi-Scale Shape Depiction
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Mestres, Nolan; Vergne, Romain; Noûs, Camille; Thollot, Joëlle; Mitra, Niloy and Viola, Ivan
    Motivated by recent findings in the field of visual perception, we present a novel approach for enhancing shape depiction and the perception of surface details. We propose a shading-based technique that relies on locally adjusting the direction of light to account for the different components of materials. Our approach ensures congruence between shape and shading flows, leading to an effective enhancement of the perception of shape and details while impairing neither the lighting nor the appearance of materials. It is formulated in a general way, allowing its use for multi-scale enhancement in real time on the GPU, as well as in global illumination contexts. We also provide artists with fine control over the enhancement at each scale.
  • Item
    EUROGRAPHICS 2021: CGF 40-2 Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Mitra, Niloy; Viola, Ivan; Mitra, Niloy and Viola, Ivan
    -