43-Issue 2

Shape and Scene Understanding
Neural Semantic Surface Maps
Luca Morreale, Noam Aigerman, Vladimir G. Kim, and Niloy J. Mitra
HaLo-NeRF: Learning Geometry-Guided Semantics for Exploring Unconstrained Photo Collections
Chen Dudai, Morris Alper, Hana Bezalel, Rana Hanocka, Itai Lang, and Hadar Averbuch-Elor
Raster-to-Graph: Floorplan Recognition via Autoregressive Graph Prediction with an Attention Transformer
Sizhe Hu, Wenming Wu, Ruolin Su, Wanni Hou, Liping Zheng, and Benzhu Xu
Reflectance and Shading Models
Interactive Exploration of Vivid Material Iridescence based on Bragg Mirrors
Gary Fourneau, Romain Pacanowski, and Pascal Barla
Single-Image SVBRDF Estimation with Learned Gradient Descent
Xuejiao Luo, Leonardo Scandolo, Adrien Bousseau, and Elmar Eisemann
Procedural Modeling and Architectural Design
PossibleImpossibles: Exploratory Procedural Design of Impossible Structures
Yuanbo Li, Tianyi Ma, Zaineb Aljumayaat, and Daniel Ritchie
Hierarchical Co-generation of Parcels and Streets in Urban Modeling
Zebin Chen, Peng Song, and F. Peter Ortner
Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches
Shervin Rasoulzadeh, Michael Wimmer, Philipp Stauss, and Iva Kovacic
Real-time Neural Rendering
TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
Linus Franke, Darius Rückert, Laura Fink, and Marc Stamminger
Real-time Neural Rendering of Dynamic Light Fields
Arno Coomans, Edoardo Alberto Dominici, Christian Döring, Joerg H. Mueller, Jozef Hladky, and Markus Steinberger
Real-Time Neural Materials using Block-Compressed Features
Clément Weinreich, Louis De Oliveira, Antoine Houdard, and Georges Nader
Neural 3D Shape Synthesis
SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
Alexandre Binninger, Amir Hertz, Olga Sorkine-Hornung, Daniel Cohen-Or, and Raja Giryes
Physically-Based Lighting for 3D Generative Models of Cars
Nicolas Violante, Alban Gauthier, Stavros Diolatzis, Thomas Leimkühler, and George Drettakis
Rendering Natural Phenomena
Real-Time Underwater Spectral Rendering
Nestor Monzon, Diego Gutierrez, Derya Akkaynak, and Adolfo Muñoz
Physically Based Real-Time Rendering of Atmospheres using Mie Theory
Simon Schneegans, Tim Meyran, Ingo Ginkel, Gabriel Zachmann, and Andreas Gerndt
An Empirically Derived Adjustable Model for Particle Size Distributions in Advection Fog
Monika Kolárová, Loïc Lachiver, and Alexander Wilkie
Geometry Processing
BallMerge: High-quality Fast Surface Reconstruction via Voronoi Balls
Amal Dev Parakkat, Stefan Ohrhallinger, Elmar Eisemann, and Pooran Memari
Non-Euclidean Sliced Optimal Transport Sampling
Baptiste Genest, Nicolas Courty, and David Coeurjolly
GLS-PIA: n-Dimensional Spherical B-Spline Curve Fitting based on Geodesic Least Square with Adaptive Knot Placement
Yuming Zhao, Zhongke Wu, and Xingce Wang
Cloth Simulation
Estimating Cloth Simulation Parameters From Tag Information and Cusick Drape Test
Eunjung Ju, Kwang-yun Kim, Sungjin Yoon, Eungjune Shim, Gyoo-Chul Kang, Phil Sik Chang, and Myung Geol Choi
Neural Garment Dynamics via Manifold-Aware Transformers
Peizhuo Li, Tuanfeng Y. Wang, Timur Levent Kesdogan, Duygu Ceylan, and Olga Sorkine-Hornung
Practical Method to Estimate Fabric Mechanics from Metadata
Henar Dominguez-Elvira, Alicia Nicás, Gabriel Cirio, Alejandro Rodríguez, and Elena Garces
Meshes
Polygon Laplacian Made Robust
Astrid Bunge, Dennis R. Bukenberger, Sven Dominik Wagner, Marc Alexa, and Mario Botsch
Advancing Front Surface Mapping
Marco Livesu
Fluid Simulation
The Impulse Particle-In-Cell Method
Sergio Sancho, Jingwei Tang, Christopher Batty, and Vinicius C. Azevedo
Wavelet Potentials: An Efficient Potential Recovery Technique for Pointwise Incompressible Fluids
Luan Lyu, Xiaohua Ren, Wei Cao, Jian Zhu, Enhua Wu, and Zhi-Xin Yang
Monte Carlo Vortical Smoothed Particle Hydrodynamics for Simulating Turbulent Flows
Xingyu Ye, Xiaokun Wang, Yanrui Xu, Jiri Kosinka, Alexandru C. Telea, Lihua You, Jian Jun Zhang, and Jian Chang
Fabrication
Computational Smocking through Fabric-Thread Interaction
Ningfeng Zhou, Jing Ren, and Olga Sorkine-Hornung
Unfolding via Mesh Approximation using Surface Flows
Lars Zawallich and Renato Pajarola
Freeform Shape Fabrication by Kerfing Stiff Materials
Nils Speetzen and Leif Kobbelt
Simulating Natural Phenomena
Physically-based Analytical Erosion for fast Terrain Generation
Petros Tzathas, Boris Gailleton, Philippe Steer, and Guillaume Cordonnier
Volcanic Skies: Coupling Explosive Eruptions with Atmospheric Simulation to Create Consistent Skyscapes
Pieter C. Pretorius, James Gain, Maud Lastic, Guillaume Cordonnier, Jiong Chen, Damien Rohmer, and Marie-Paule Cani
Perceptual Rendering
Navigating the Manifold of Translucent Appearance
Dario Lanza, Belen Masia, and Adrian Jarabo
Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views
Hanxue Liang, Tianhao Wu, Param Hanji, Francesco Banterle, Hongyun Gao, Rafal Mantiuk, and Cengiz Öztireli
Predicting Perceived Gloss: Do Weak Labels Suffice?
Julia Guerrero-Viu, Jose Daniel Subias, Ana Serrano, Katherine R. Storrs, Roland W. Fleming, Belen Masia, and Diego Gutierrez
Digital Humans and Characters
TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model
Stephan Wenninger, Fabian Kemper, Ulrich Schwanecke, and Mario Botsch
CharacterMixer: Rig-Aware Interpolation of 3D Characters
Xiao Zhan, Rao Fu, and Daniel Ritchie
Stylize My Wrinkles: Bridging the Gap from Simulation to Reality
Sebastian Weiss, Jackson Stanhope, Prashanth Chandran, Gaspard Zoss, and Derek Bradley
Sampling and Image Enhancement
Enhancing Image Quality Prediction with Self-supervised Visual Masking
Ugur Çogalan, Mojtaba Bemana, Hans-Peter Seidel, and Karol Myszkowski
Enhancing Spatiotemporal Resampling with a Novel MIS Weight
Xingyue Pan, Jiaxuan Zhang, Jiancong Huang, and Ligang Liu
Neural Denoising for Deep-Z Monte Carlo Renderings
Xianyao Zhang, Gerhard Röthlin, Shilin Zhu, Tunç Ozan Aydin, Farnood Salehi, Markus Gross, and Marios Papas
Face Modeling and Reconstruction
Learning to Stabilize Faces
Jan Bednarik, Erroll Wood, Vassilis Choutas, Timo Bolkart, Daoye Wang, Chenglei Wu, and Thabo Beeler
3D Reconstruction and Semantic Modeling of Eyelashes
Glenn Kerbiriou, Quentin Avril, and Maud Marchal
ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region
Gengyan Li, Kripasindhu Sarkar, Abhimitra Meka, Marcel Buehler, Franziska Mueller, Paulo Gotardo, Otmar Hilliges, and Thabo Beeler
Vector Art and Line Drawings
Region-Aware Simplification and Stylization of 3D Line Drawings
Vivien Nguyen, Matthew Fisher, Aaron Hertzmann, and Szymon Rusinkiewicz
FontCLIP: A Semantic Typography Visual-Language Model for Multilingual Font Applications
Yuki Tatsukawa, I-Chao Shen, Anran Qi, Yuki Koyama, Takeo Igarashi, and Ariel Shamir
Sketch Video Synthesis
Yudian Zheng, Xiaodong Cun, Menghan Xia, and Chi-Man Pun
Neural Texture and Image Synthesis
Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs
Áron Samuel Kovács, Pedro Hermosilla, and Renata Georgia Raidou
GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures
Aurel Gruber, Edo Collins, Abhimitra Meka, Franziska Mueller, Kripasindhu Sarkar, Sergio Orts-Escolano, Luca Prasso, Jay Busch, Markus Gross, and Thabo Beeler
Stylized Face Sketch Extraction via Generative Prior with Limited Data
Kwan Yun, Kwanggyoon Seo, Chang Wook Seo, Soyeon Yoon, Seongcheol Kim, Soohyun Ji, Amirsaman Ashtari, and Junyong Noh
Camera Paths and Motion Tracking
Cinematographic Camera Diffusion Model
Hongda Jiang, Xi Wang, Marc Christie, Libin Liu, and Baoquan Chen
OptFlowCam: A 3D-Image-Flow-Based Metric in Camera Space for Camera Paths in Scenes with Extreme Scale Variations
Lisa Piotrowski, Michael Motejat, Christian Rössl, and Holger Theisel
DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers
Dongseok Yang, Jiho Kang, Lingni Ma, Joseph Greer, Yuting Ye, and Sung-Hee Lee

BibTeX (43-Issue 2)
                
@article{10.1111:cgf.15058,
  journal = {Computer Graphics Forum},
  title = {{EUROGRAPHICS 2024: CGF 43-2 Frontmatter}},
  author = {Bermano, Amit H. and Kalogerakis, Evangelos},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15058}
}

@article{10.1111:cgf.15005,
  journal = {Computer Graphics Forum},
  title = {{Neural Semantic Surface Maps}},
  author = {Morreale, Luca and Aigerman, Noam and Kim, Vladimir G. and Mitra, Niloy J.},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15005}
}

@article{10.1111:cgf.15006,
  journal = {Computer Graphics Forum},
  title = {{HaLo-NeRF: Learning Geometry-Guided Semantics for Exploring Unconstrained Photo Collections}},
  author = {Dudai, Chen and Alper, Morris and Bezalel, Hana and Hanocka, Rana and Lang, Itai and Averbuch-Elor, Hadar},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15006}
}

@article{10.1111:cgf.15007,
  journal = {Computer Graphics Forum},
  title = {{Raster-to-Graph: Floorplan Recognition via Autoregressive Graph Prediction with an Attention Transformer}},
  author = {Hu, Sizhe and Wu, Wenming and Su, Ruolin and Hou, Wanni and Zheng, Liping and Xu, Benzhu},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15007}
}

@article{10.1111:cgf.15017,
  journal = {Computer Graphics Forum},
  title = {{Interactive Exploration of Vivid Material Iridescence based on Bragg Mirrors}},
  author = {Fourneau, Gary and Pacanowski, Romain and Barla, Pascal},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15017}
}

@article{10.1111:cgf.15018,
  journal = {Computer Graphics Forum},
  title = {{Single-Image SVBRDF Estimation with Learned Gradient Descent}},
  author = {Luo, Xuejiao and Scandolo, Leonardo and Bousseau, Adrien and Eisemann, Elmar},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15018}
}

@article{10.1111:cgf.15052,
  journal = {Computer Graphics Forum},
  title = {{PossibleImpossibles: Exploratory Procedural Design of Impossible Structures}},
  author = {Li, Yuanbo and Ma, Tianyi and Aljumayaat, Zaineb and Ritchie, Daniel},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15052}
}

@article{10.1111:cgf.15053,
  journal = {Computer Graphics Forum},
  title = {{Hierarchical Co-generation of Parcels and Streets in Urban Modeling}},
  author = {Chen, Zebin and Song, Peng and Ortner, F. Peter},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15053}
}

@article{10.1111:cgf.15054,
  journal = {Computer Graphics Forum},
  title = {{Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches}},
  author = {Rasoulzadeh, Shervin and Wimmer, Michael and Stauss, Philipp and Kovacic, Iva},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15054}
}

@article{10.1111:cgf.15012,
  journal = {Computer Graphics Forum},
  title = {{TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering}},
  author = {Franke, Linus and Rückert, Darius and Fink, Laura and Stamminger, Marc},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15012}
}

@article{10.1111:cgf.15014,
  journal = {Computer Graphics Forum},
  title = {{Real-time Neural Rendering of Dynamic Light Fields}},
  author = {Coomans, Arno and Dominici, Edoardo Alberto and Döring, Christian and Mueller, Joerg H. and Hladky, Jozef and Steinberger, Markus},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15014}
}

@article{10.1111:cgf.15013,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Neural Materials using Block-Compressed Features}},
  author = {Weinreich, Clément and Oliveira, Louis De and Houdard, Antoine and Nader, Georges},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15013}
}

@article{10.1111:cgf.15015,
  journal = {Computer Graphics Forum},
  title = {{SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling}},
  author = {Binninger, Alexandre and Hertz, Amir and Sorkine-Hornung, Olga and Cohen-Or, Daniel and Giryes, Raja},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15015}
}

@article{10.1111:cgf.15011,
  journal = {Computer Graphics Forum},
  title = {{Physically-Based Lighting for 3D Generative Models of Cars}},
  author = {Violante, Nicolas and Gauthier, Alban and Diolatzis, Stavros and Leimkühler, Thomas and Drettakis, George},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15011}
}

@article{10.1111:cgf.15009,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Underwater Spectral Rendering}},
  author = {Monzon, Nestor and Gutierrez, Diego and Akkaynak, Derya and Muñoz, Adolfo},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15009}
}

@article{10.1111:cgf.15010,
  journal = {Computer Graphics Forum},
  title = {{Physically Based Real-Time Rendering of Atmospheres using Mie Theory}},
  author = {Schneegans, Simon and Meyran, Tim and Ginkel, Ingo and Zachmann, Gabriel and Gerndt, Andreas},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15010}
}

@article{10.1111:cgf.15008,
  journal = {Computer Graphics Forum},
  title = {{An Empirically Derived Adjustable Model for Particle Size Distributions in Advection Fog}},
  author = {Kolárová, Monika and Lachiver, Loïc and Wilkie, Alexander},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15008}
}

@article{10.1111:cgf.15019,
  journal = {Computer Graphics Forum},
  title = {{BallMerge: High-quality Fast Surface Reconstruction via Voronoi Balls}},
  author = {Parakkat, Amal Dev and Ohrhallinger, Stefan and Eisemann, Elmar and Memari, Pooran},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15019}
}

@article{10.1111:cgf.15020,
  journal = {Computer Graphics Forum},
  title = {{Non-Euclidean Sliced Optimal Transport Sampling}},
  author = {Genest, Baptiste and Courty, Nicolas and Coeurjolly, David},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15020}
}

@article{10.1111:cgf.15021,
  journal = {Computer Graphics Forum},
  title = {{GLS-PIA: n-Dimensional Spherical B-Spline Curve Fitting based on Geodesic Least Square with Adaptive Knot Placement}},
  author = {Zhao, Yuming and Wu, Zhongke and Wang, Xingce},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15021}
}

@article{10.1111:cgf.15027,
  journal = {Computer Graphics Forum},
  title = {{Estimating Cloth Simulation Parameters From Tag Information and Cusick Drape Test}},
  author = {Ju, Eunjung and Kim, Kwang-yun and Yoon, Sungjin and Shim, Eungjune and Kang, Gyoo-Chul and Chang, Phil Sik and Choi, Myung Geol},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15027}
}

@article{10.1111:cgf.15028,
  journal = {Computer Graphics Forum},
  title = {{Neural Garment Dynamics via Manifold-Aware Transformers}},
  author = {Li, Peizhuo and Wang, Tuanfeng Y. and Kesdogan, Timur Levent and Ceylan, Duygu and Sorkine-Hornung, Olga},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15028}
}

@article{10.1111:cgf.15029,
  journal = {Computer Graphics Forum},
  title = {{Practical Method to Estimate Fabric Mechanics from Metadata}},
  author = {Dominguez-Elvira, Henar and Nicás, Alicia and Cirio, Gabriel and Rodríguez, Alejandro and Garces, Elena},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15029}
}

@article{10.1111:cgf.15025,
  journal = {Computer Graphics Forum},
  title = {{Polygon Laplacian Made Robust}},
  author = {Bunge, Astrid and Bukenberger, Dennis R. and Wagner, Sven Dominik and Alexa, Marc and Botsch, Mario},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15025}
}

@article{10.1111:cgf.15026,
  journal = {Computer Graphics Forum},
  title = {{Advancing Front Surface Mapping}},
  author = {Livesu, Marco},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15026}
}

@article{10.1111:cgf.15022,
  journal = {Computer Graphics Forum},
  title = {{The Impulse Particle-In-Cell Method}},
  author = {Sancho, Sergio and Tang, Jingwei and Batty, Christopher and Azevedo, Vinicius C.},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15022}
}

@article{10.1111:cgf.15023,
  journal = {Computer Graphics Forum},
  title = {{Wavelet Potentials: An Efficient Potential Recovery Technique for Pointwise Incompressible Fluids}},
  author = {Lyu, Luan and Ren, Xiaohua and Cao, Wei and Zhu, Jian and Wu, Enhua and Yang, Zhi-Xin},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15023}
}

@article{10.1111:cgf.15024,
  journal = {Computer Graphics Forum},
  title = {{Monte Carlo Vortical Smoothed Particle Hydrodynamics for Simulating Turbulent Flows}},
  author = {Ye, Xingyu and Wang, Xiaokun and Xu, Yanrui and Kosinka, Jiri and Telea, Alexandru C. and You, Lihua and Zhang, Jian Jun and Chang, Jian},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15024}
}

@article{10.1111:cgf.15030,
  journal = {Computer Graphics Forum},
  title = {{Computational Smocking through Fabric-Thread Interaction}},
  author = {Zhou, Ningfeng and Ren, Jing and Sorkine-Hornung, Olga},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15030}
}

@article{10.1111:cgf.15031,
  journal = {Computer Graphics Forum},
  title = {{Unfolding via Mesh Approximation using Surface Flows}},
  author = {Zawallich, Lars and Pajarola, Renato},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15031}
}

@article{10.1111:cgf.15032,
  journal = {Computer Graphics Forum},
  title = {{Freeform Shape Fabrication by Kerfing Stiff Materials}},
  author = {Speetzen, Nils and Kobbelt, Leif},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15032}
}

@article{10.1111:cgf.15033,
  journal = {Computer Graphics Forum},
  title = {{Physically-based Analytical Erosion for fast Terrain Generation}},
  author = {Tzathas, Petros and Gailleton, Boris and Steer, Philippe and Cordonnier, Guillaume},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15033}
}

@article{10.1111:cgf.15034,
  journal = {Computer Graphics Forum},
  title = {{Volcanic Skies: Coupling Explosive Eruptions with Atmospheric Simulation to Create Consistent Skyscapes}},
  author = {Pretorius, Pieter C. and Gain, James and Lastic, Maud and Cordonnier, Guillaume and Chen, Jiong and Rohmer, Damien and Cani, Marie-Paule},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15034}
}

@article{10.1111:cgf.15035,
  journal = {Computer Graphics Forum},
  title = {{Navigating the Manifold of Translucent Appearance}},
  author = {Lanza, Dario and Masia, Belen and Jarabo, Adrian},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15035}
}

@article{10.1111:cgf.15036,
  journal = {Computer Graphics Forum},
  title = {{Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views}},
  author = {Liang, Hanxue and Wu, Tianhao and Hanji, Param and Banterle, Francesco and Gao, Hongyun and Mantiuk, Rafal and Öztireli, Cengiz},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15036}
}

@article{10.1111:cgf.15037,
  journal = {Computer Graphics Forum},
  title = {{Predicting Perceived Gloss: Do Weak Labels Suffice?}},
  author = {Guerrero-Viu, Julia and Subias, Jose Daniel and Serrano, Ana and Storrs, Katherine R. and Fleming, Roland W. and Masia, Belen and Gutierrez, Diego},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15037}
}

@article{10.1111:cgf.15046,
  journal = {Computer Graphics Forum},
  title = {{TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model}},
  author = {Wenninger, Stephan and Kemper, Fabian and Schwanecke, Ulrich and Botsch, Mario},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15046}
}

@article{10.1111:cgf.15047,
  journal = {Computer Graphics Forum},
  title = {{CharacterMixer: Rig-Aware Interpolation of 3D Characters}},
  author = {Zhan, Xiao and Fu, Rao and Ritchie, Daniel},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15047}
}

@article{10.1111:cgf.15048,
  journal = {Computer Graphics Forum},
  title = {{Stylize My Wrinkles: Bridging the Gap from Simulation to Reality}},
  author = {Weiss, Sebastian and Stanhope, Jackson and Chandran, Prashanth and Zoss, Gaspard and Bradley, Derek},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15048}
}

@article{10.1111:cgf.15051,
  journal = {Computer Graphics Forum},
  title = {{Enhancing Image Quality Prediction with Self-supervised Visual Masking}},
  author = {Çogalan, Ugur and Bemana, Mojtaba and Seidel, Hans-Peter and Myszkowski, Karol},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15051}
}

@article{10.1111:cgf.15049,
  journal = {Computer Graphics Forum},
  title = {{Enhancing Spatiotemporal Resampling with a Novel MIS Weight}},
  author = {Pan, Xingyue and Zhang, Jiaxuan and Huang, Jiancong and Liu, Ligang},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15049}
}

@article{10.1111:cgf.15050,
  journal = {Computer Graphics Forum},
  title = {{Neural Denoising for Deep-Z Monte Carlo Renderings}},
  author = {Zhang, Xianyao and Röthlin, Gerhard and Zhu, Shilin and Aydin, Tunç Ozan and Salehi, Farnood and Gross, Markus and Papas, Marios},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15050}
}

@article{10.1111:cgf.15038,
  journal = {Computer Graphics Forum},
  title = {{Learning to Stabilize Faces}},
  author = {Bednarik, Jan and Wood, Erroll and Choutas, Vassilis and Bolkart, Timo and Wang, Daoye and Wu, Chenglei and Beeler, Thabo},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15038}
}

@article{10.1111:cgf.15040,
  journal = {Computer Graphics Forum},
  title = {{3D Reconstruction and Semantic Modeling of Eyelashes}},
  author = {Kerbiriou, Glenn and Avril, Quentin and Marchal, Maud},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15040}
}

@article{10.1111:cgf.15041,
  journal = {Computer Graphics Forum},
  title = {{ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region}},
  author = {Li, Gengyan and Sarkar, Kripasindhu and Meka, Abhimitra and Buehler, Marcel and Mueller, Franziska and Gotardo, Paulo and Hilliges, Otmar and Beeler, Thabo},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15041}
}

@article{10.1111:cgf.15042,
  journal = {Computer Graphics Forum},
  title = {{Region-Aware Simplification and Stylization of 3D Line Drawings}},
  author = {Nguyen, Vivien and Fisher, Matthew and Hertzmann, Aaron and Rusinkiewicz, Szymon},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15042}
}

@article{10.1111:cgf.15043,
  journal = {Computer Graphics Forum},
  title = {{FontCLIP: A Semantic Typography Visual-Language Model for Multilingual Font Applications}},
  author = {Tatsukawa, Yuki and Shen, I-Chao and Qi, Anran and Koyama, Yuki and Igarashi, Takeo and Shamir, Ariel},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15043}
}

@article{10.1111:cgf.15044,
  journal = {Computer Graphics Forum},
  title = {{Sketch Video Synthesis}},
  author = {Zheng, Yudian and Cun, Xiaodong and Xia, Menghan and Pun, Chi-Man},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15044}
}

@article{10.1111:cgf.15016,
  journal = {Computer Graphics Forum},
  title = {{Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs}},
  author = {Kovács, Áron Samuel and Hermosilla, Pedro and Raidou, Renata Georgia},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15016}
}

@article{10.1111:cgf.15039,
  journal = {Computer Graphics Forum},
  title = {{GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures}},
  author = {Gruber, Aurel and Collins, Edo and Meka, Abhimitra and Mueller, Franziska and Sarkar, Kripasindhu and Orts-Escolano, Sergio and Prasso, Luca and Busch, Jay and Gross, Markus and Beeler, Thabo},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15039}
}

@article{10.1111:cgf.15045,
  journal = {Computer Graphics Forum},
  title = {{Stylized Face Sketch Extraction via Generative Prior with Limited Data}},
  author = {Yun, Kwan and Seo, Kwanggyoon and Seo, Chang Wook and Yoon, Soyeon and Kim, Seongcheol and Ji, Soohyun and Ashtari, Amirsaman and Noh, Junyong},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15045}
}

@article{10.1111:cgf.15055,
  journal = {Computer Graphics Forum},
  title = {{Cinematographic Camera Diffusion Model}},
  author = {Jiang, Hongda and Wang, Xi and Christie, Marc and Liu, Libin and Chen, Baoquan},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15055}
}

@article{10.1111:cgf.15056,
  journal = {Computer Graphics Forum},
  title = {{OptFlowCam: A 3D-Image-Flow-Based Metric in Camera Space for Camera Paths in Scenes with Extreme Scale Variations}},
  author = {Piotrowski, Lisa and Motejat, Michael and Rössl, Christian and Theisel, Holger},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15056}
}

@article{10.1111:cgf.15057,
  journal = {Computer Graphics Forum},
  title = {{DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers}},
  author = {Yang, Dongseok and Kang, Jiho and Ma, Lingni and Greer, Joseph and Ye, Yuting and Lee, Sung-Hee},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15057}
}
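
The entries above can be saved to a .bib file and cited with standard BibTeX; the colon in the keys is legal in \cite. A minimal LaTeX sketch (the file name cgf43-2.bib is a placeholder):

\documentclass{article}
\begin{document}
TRIPS~\cite{10.1111:cgf.15012} splats points into a screen-space image pyramid.
\bibliographystyle{plain}
\bibliography{cgf43-2}  % save the entries above as cgf43-2.bib
\end{document}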

Recent Submissions

  • Item
    EUROGRAPHICS 2024: CGF 43-2 Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Bermano, Amit H.; Kalogerakis, Evangelos
  • Item
    Neural Semantic Surface Maps
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Morreale, Luca; Aigerman, Noam; Kim, Vladimir G.; Mitra, Niloy J.; Bermano, Amit H.; Kalogerakis, Evangelos
    We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another. Lack of annotated data prohibits direct inference of 3D semantic priors; instead, current state-of-the-art methods predominantly optimize geometric properties or require varying amounts of manual annotation. To overcome the lack of annotated training data, we distill semantic matches from pre-trained vision models: our method renders the pair of untextured 3D shapes from multiple viewpoints; the resulting renders are then fed into an off-the-shelf image-matching strategy that leverages a pre-trained visual model to produce feature points. This yields semantic correspondences, which are projected back to the 3D shapes, producing a raw matching that is inaccurate and inconsistent across different viewpoints. These correspondences are refined and distilled into an inter-surface map by a dedicated optimization scheme, which promotes bijectivity and continuity of the output map. We illustrate that our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotations or any 3D training data. Furthermore, it proves effective in scenarios with high semantic complexity, where objects are non-isometrically related, as well as in situations where they are nearly isometric.
  • Item
    HaLo-NeRF: Learning Geometry-Guided Semantics for Exploring Unconstrained Photo Collections
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Dudai, Chen; Alper, Morris; Bezalel, Hana; Hanocka, Rana; Lang, Itai; Averbuch-Elor, Hadar; Bermano, Amit H.; Kalogerakis, Evangelos
    Internet image collections containing photos captured by crowds of photographers show promise for enabling digital exploration of large-scale tourist landmarks. However, prior works focus primarily on geometric reconstruction and visualization, neglecting the key role of language in providing a semantic interface for navigation and fine-grained understanding. In more constrained 3D domains, recent methods have leveraged modern vision-and-language models as a strong prior of 2D visual semantics. While these models display an excellent understanding of broad visual semantics, they struggle with unconstrained photo collections depicting such tourist landmarks, as they lack expert knowledge of the architectural domain and fail to exploit the geometric consistency of images capturing multiple views of such scenes. In this work, we present a localization system that connects neural representations of scenes depicting large-scale landmarks with text describing a semantic region within the scene, by harnessing the power of SOTA vision-and-language models with adaptations for understanding landmark scene semantics. To bolster such models with fine-grained knowledge, we leverage large-scale Internet data containing images of similar landmarks along with weakly-related textual information. Our approach is built upon the premise that images physically grounded in space can provide a powerful supervision signal for localizing new concepts, whose semantics may be unlocked from Internet textual metadata with large language models. We use correspondences between views of scenes to bootstrap spatial understanding of these semantics, providing guidance for 3D-compatible segmentation that ultimately lifts to a volumetric scene representation. To evaluate our method, we present a new benchmark dataset containing large-scale scenes with ground-truth segmentations for multiple semantic concepts. Our results show that HaLo-NeRF can accurately localize a variety of semantic concepts related to architectural landmarks, surpassing the results of other 3D models as well as strong 2D segmentation baselines. Our code and data are publicly available at https://tau-vailab.github.io/HaLo-NeRF/.
  • Item
    Raster-to-Graph: Floorplan Recognition via Autoregressive Graph Prediction with an Attention Transformer
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Hu, Sizhe; Wu, Wenming; Su, Ruolin; Hou, Wanni; Zheng, Liping; Xu, Benzhu; Bermano, Amit H.; Kalogerakis, Evangelos
    Recognizing the detailed information embedded in rasterized floorplans is at the research forefront in the community of computer graphics and vision. With the advent of deep neural networks, automatic floorplan recognition has made tremendous breakthroughs. However, co-recognizing both the structures and semantics of floorplans through one neural network remains a significant challenge. In this paper, we introduce Raster-to-Graph, a novel framework that automatically achieves structural and semantic recognition of floorplans. We represent vectorized floorplans as structural graphs embedded with floorplan semantics, thus transforming the floorplan recognition task into a structural graph prediction problem. We design an autoregressive prediction framework using the neural network architecture of the visual attention Transformer, iteratively predicting the wall junctions and wall segments of floorplans in the order of graph traversal. Additionally, we propose a large-scale floorplan dataset containing over 10,000 real-world residential floorplans. Our autoregressive framework can automatically recognize the structures and semantics of floorplans. Extensive experiments demonstrate the effectiveness of our framework, showing significant improvements on all metrics. Qualitative and quantitative evaluations indicate that our framework outperforms existing state-of-the-art methods. Code and dataset for this paper are available at: https://github.com/HSZVIS/Raster-to-Graph.
  • Item
    Interactive Exploration of Vivid Material Iridescence based on Bragg Mirrors
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Fourneau, Gary; Pacanowski, Romain; Barla, Pascal; Bermano, Amit H.; Kalogerakis, Evangelos
    Many animals, plants or gems exhibit iridescent material appearance in nature. These are due to specific geometric structures at scales comparable to visible wavelengths, yielding so-called structural colors. The most vivid examples are due to photonic crystals, where the same structure is repeated in one, two or three dimensions, augmenting the magnitude and complexity of interference effects. In this paper, we study the appearance of 1D photonic crystals (repetitive pairs of thin films), also called Bragg mirrors. Previous work has considered the effect of multiple thin films using the classical transfer matrix approach, which increases in complexity when the number of repetitions increases. Our first contribution is to introduce a more efficient closed-form Bragg mirror reflectance formula [Yeh88] to the Graphics community, as well as an approximation that lends itself to efficient spectral integration for RGB rendering. We then explore the appearance of stacks made of rough Bragg layers. Here our contribution is to show that they may lead to a ballistic transmission, significantly speeding up position-free rendering and leading to an efficient single-reflection BRDF model.
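    As background, the classical transfer-matrix evaluation the abstract contrasts against is compact enough to sketch. Below is the textbook characteristic-matrix method at normal incidence, not the authors' closed-form formula; the layer indices and counts are illustrative made-up values roughly matching common coating materials.

    import numpy as np

    def bragg_reflectance(wavelength, n_hi, n_lo, d_hi, d_lo, pairs, n_in=1.0, n_sub=1.5):
        """Reflectance of a (hi/lo)^pairs thin-film stack at normal incidence,
        via the classical 2x2 characteristic (transfer) matrix method."""
        M = np.eye(2, dtype=complex)
        for _ in range(pairs):
            for n, d in ((n_hi, d_hi), (n_lo, d_lo)):
                delta = 2.0 * np.pi * n * d / wavelength      # phase thickness of one layer
                layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
                M = M @ layer                                 # complexity grows with repetitions
        B, C = M @ np.array([1.0, n_sub])                     # substrate boundary condition
        r = (n_in * B - C) / (n_in * B + C)                   # amplitude reflectance
        return abs(r) ** 2

    # Quarter-wave stack tuned to 550 nm: reflectance peaks sharply in that band.
    for lam in (450e-9, 550e-9, 650e-9):
        print(lam, bragg_reflectance(lam, n_hi=2.3, n_lo=1.38,
                                     d_hi=550e-9 / (4 * 2.3), d_lo=550e-9 / (4 * 1.38),
                                     pairs=8))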
  • Item
    Single-Image SVBRDF Estimation with Learned Gradient Descent
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Luo, Xuejiao; Scandolo, Leonardo; Bousseau, Adrien; Eisemann, Elmar; Bermano, Amit H.; Kalogerakis, Evangelos
    Recovering spatially-varying materials from a single photograph of a surface is inherently ill-posed, making the direct application of a gradient descent on the reflectance parameters prone to poor minima. Recent methods leverage deep learning either by directly regressing reflectance parameters using feed-forward neural networks or by learning a latent space of SVBRDFs using encoder-decoder or generative adversarial networks followed by a gradient-based optimization in latent space. The former is fast but does not account for the likelihood of the prediction, i.e., how well the resulting reflectance explains the input image. The latter provides a strong prior on the space of spatially-varying materials, but this prior can hinder the reconstruction of images that are too different from the training data. Our method combines the strengths of both approaches. We optimize reflectance parameters to best reconstruct the input image using a recurrent neural network, which iteratively predicts how to update the reflectance parameters given the gradient of the reconstruction likelihood. By combining a learned prior with a likelihood measure, our approach provides a maximum a posteriori estimate of the SVBRDF. Our evaluation shows that this learned gradient-descent method achieves state-of-the-art performance for SVBRDF estimation on synthetic and real images.
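    The "learned gradient descent" structure described above can be caricatured in a few lines: an update network maps the current gradient (and a hidden state) to a parameter step. This sketch uses a toy linear inverse problem and a hand-written stand-in for the trained recurrent network, so it only illustrates the control flow, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy inverse problem standing in for SVBRDF estimation: recover x from y = A @ x.
    A = rng.normal(size=(8, 4))
    y = A @ rng.normal(size=4)

    def loss_and_grad(x):
        r = A @ x - y
        return r @ r, 2.0 * A.T @ r

    # Stand-in for the trained recurrent update network: maps (gradient, state)
    # to (parameter step, new state). The real network is trained so that this
    # mapping drives the reconstruction loss down in few iterations.
    def update_net(grad, state):
        new_state = 0.5 * state - 0.01 * grad       # momentum-like hidden state
        return -0.01 * grad + 0.1 * new_state, new_state

    x, state = np.zeros(4), np.zeros(4)
    for _ in range(100):
        loss, grad = loss_and_grad(x)
        step, state = update_net(grad, state)       # the network picks the step
        x += step
    print("final loss:", loss_and_grad(x)[0])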
  • Item
    PossibleImpossibles: Exploratory Procedural Design of Impossible Structures
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Li, Yuanbo; Ma, Tianyi; Aljumayaat, Zaineb; Ritchie, Daniel; Bermano, Amit H.; Kalogerakis, Evangelos
    We present a method for generating structures in three-dimensional space that appear to be impossible when viewed from specific perspectives. Previous approaches focus on helping users to edit specific structures and require users to understand the structural positioning that causes the impossibility. In contrast, our system is designed to aid users without prior knowledge to explore a wide range of potentially impossible structures. The essence of our method lies in features we call visual bridges that confuse viewers regarding the depth of the resulting structure. We use these features as starting points and employ procedural modeling to systematically generate the result. We propose scoring functions for enforcing desirable spatial arrangement of the result and use Sequential Monte Carlo to sample outputs that score well under these functions. We also present a proof-of-concept user interface and demonstrate various results generated using our system.
  • Item
    Hierarchical Co-generation of Parcels and Streets in Urban Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Chen, Zebin; Song, Peng; Ortner, F. Peter; Bermano, Amit H.; Kalogerakis, Evangelos
    We present a computational framework for modeling land parcels and streets. In the real world, parcels and streets are highly coupled with each other, since a street network connects all the parcels in a certain area. However, existing works model parcels and streets separately to simplify the problem, resulting in urban layouts with irregular parcels and/or suboptimal streets. In this paper, we propose a hierarchical approach to co-generate parcels and streets from a user-specified polygonal land shape, guided by a set of fundamental urban design requirements. At each hierarchical level, new parcels are generated based on binary splitting of existing parcels, and new streets are subsequently generated by leveraging efficient graph search tools to ensure that each new parcel has street access (a toy version of the splitting step is sketched below). Finally, we optimize the geometry of the generated parcels and streets to further improve their geometric quality. Our computational framework outputs an urban layout with a desired number of regular parcels that are reachable via a connected street network, and users are allowed to control the modeling process both locally and globally. Quantitative comparisons with state-of-the-art approaches show that our framework is able to generate parcels and streets that are superior in some aspects.
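    The sketch below reduces the binary-splitting idea to axis-aligned rectangles and omits street generation entirely; all parameters are made up.

    import random

    def split_parcels(rect, min_area):
        """Recursively binary-split an axis-aligned rect (x, y, w, h) until parcels
        reach the target size; a stand-in for the paper's general polygon splits."""
        x, y, w, h = rect
        if w * h <= 2 * min_area:
            return [rect]
        if w >= h:  # split the longer side, keeping parcels close to square
            t = random.uniform(0.4, 0.6) * w
            halves = [(x, y, t, h), (x + t, y, w - t, h)]
        else:
            t = random.uniform(0.4, 0.6) * h
            halves = [(x, y, w, t), (x, y + t, w, h - t)]
        return [p for half in halves for p in split_parcels(half, min_area)]

    random.seed(1)
    parcels = split_parcels((0.0, 0.0, 200.0, 120.0), min_area=500.0)
    print(len(parcels), "parcels; first:", parcels[0])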
  • Item
    Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Rasoulzadeh, Shervin; Wimmer, Michael; Stauss, Philipp; Kovacic, Iva; Bermano, Amit H.; Kalogerakis, Evangelos
    We present Strokes2Surface, an offline geometry reconstruction pipeline that recovers well-connected curve networks from imprecise 4D sketches to bridge concept design and digital modeling stages in architectural design. The input to our pipeline consists of 3D strokes' polyline vertices and their timestamps as the 4th dimension, along with additional metadata recorded throughout sketching. Inspired by architectural sketching practices, our pipeline combines a classifier and two clustering models to achieve its goal. First, using a set of hand-engineered features extracted from the sketch, the classifier recognizes whether an individual stroke depicts a boundary (Shape stroke) or an enclosed area (Scribble stroke). Next, the two clustering models parse strokes of each type into distinct groups, each representing an individual edge or face of the intended architectural object. Curve networks are then formed through topology recovery of consolidated Shape clusters and surfaced using Scribble clusters guiding the cycle discovery. Our evaluation is threefold: We confirm the usability of the Strokes2Surface pipeline in architectural design use cases via a user study, we validate our choice of features via statistical analysis and ablation studies on our collected dataset, and we compare our outputs against a range of reconstructions computed using alternative methods.
  • Item
    TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Franke, Linus; Rückert, Darius; Fink, Laura; Stamminger, Marc; Bermano, Amit H.; Kalogerakis, Evangelos
    Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency. However, even the latest approaches in this domain are not without shortcomings. 3D Gaussian Splatting [KKLD23] struggles when tasked with rendering highly detailed scenes, due to blurring and cloudy artifacts. On the other hand, ADOP [RFS22] can accommodate crisper images, but its neural reconstruction network decreases performance, grapples with temporal instability, and is unable to effectively address large gaps in the point cloud. In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP. The fundamental concept behind our novel technique involves rasterizing points into a screen-space image pyramid, with the selection of the pyramid layer determined by the projected point size. This approach allows rendering arbitrarily large points using a single trilinear write. A lightweight neural network is then used to reconstruct a hole-free image including detail beyond splat resolution. Importantly, our render pipeline is entirely differentiable, allowing for automatic optimization of both point sizes and positions. Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality while maintaining a real-time frame rate of 60 frames per second on readily available hardware. This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: https://lfranke.github.io/trips
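    The "single trilinear write" can be illustrated with a toy splatter: the projected point size selects two adjacent pyramid levels, and the point is written with bilinear weights within each level plus a linear weight across levels. This only mirrors the idea from the abstract; resolutions and coordinates are made-up examples.

    import math
    import numpy as np

    def trilinear_splat(pyramid, x, y, size, value):
        """Write `value` at continuous level-0 coords (x, y) into the two pyramid
        levels bracketing log2(size), with bilinear weights inside each level."""
        level = max(0.0, math.log2(max(size, 1.0)))       # footprint picks the level
        l0 = min(int(level), len(pyramid) - 1)
        l1 = min(l0 + 1, len(pyramid) - 1)
        wl = level - l0                                   # linear weight across levels
        for l, w_level in ((l0, 1.0 - wl), (l1, wl)):
            img = pyramid[l]
            px, py = x / 2**l - 0.5, y / 2**l - 0.5       # continuous pixel coords
            ix, iy = int(math.floor(px)), int(math.floor(py))
            fx, fy = px - ix, py - iy
            for dy, wy in ((0, 1 - fy), (1, fy)):         # bilinear footprint
                for dx, wx in ((0, 1 - fx), (1, fx)):
                    u, v = ix + dx, iy + dy
                    if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
                        img[v, u] += w_level * wy * wx * value

    pyramid = [np.zeros((16 >> l, 16 >> l)) for l in range(4)]
    trilinear_splat(pyramid, x=7.3, y=4.8, size=3.0, value=1.0)
    print([im.sum().round(3) for im in pyramid])  # weight split across levels 1 and 2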
  • Item
    Real-time Neural Rendering of Dynamic Light Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Coomans, Arno; Dominici, Edoardo Alberto; Döring, Christian; Mueller, Joerg H.; Hladky, Jozef; Steinberger, Markus; Bermano, Amit H.; Kalogerakis, Evangelos
    Synthesising high-quality views of dynamic scenes via path tracing is prohibitively expensive. Although caching offline-quality global illumination in neural networks alleviates this issue, existing neural view synthesis methods are limited to mainly static scenes, have low inference performance or do not integrate well with existing rendering paradigms. We propose a novel neural method that is able to capture a dynamic light field, renders at real-time frame rates at 1920×1080 resolution and integrates seamlessly with Monte Carlo ray tracing frameworks. We demonstrate how spatial, temporal, and a novel surface-space encoding are each effective at capturing different kinds of spatio-temporal signals. Together with a compact fully-fused neural network and architectural improvements, we achieve a twenty-fold increase in network inference speed compared to related methods at equal or better quality. Our approach is suitable for providing offline-quality real-time rendering in a variety of scenarios, such as free-viewpoint video, interactive multi-view rendering, or streaming rendering. Finally, our work can be integrated into other rendering paradigms, e.g., providing a dynamic background for interactive scenarios where the foreground is rendered with traditional methods.
  • Item
    Real-Time Neural Materials using Block-Compressed Features
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Weinreich, Clément; Oliveira, Louis De; Houdard, Antoine; Nader, Georges; Bermano, Amit H.; Kalogerakis, Evangelos
    Neural materials typically consist of a collection of neural features along with a decoder network. The main challenge in integrating such models in real-time rendering pipelines lies in the large size required to store their features in GPU memory and the complexity of evaluating the network efficiently. We present a neural material model whose features and decoder are specifically designed to be used in real-time rendering pipelines. Our framework leverages hardware-based block compression (BC) texture formats to store the learned features and trains the model to output the material information continuously in space and scale. To achieve this, we organize the features in a block-based manner and emulate BC6 decompression during training, making it possible to export them as regular BC6 textures. This structure allows us to use high-resolution features while maintaining a low memory footprint. Consequently, this enhances our model's overall capability, enabling the use of a lightweight and simple decoder architecture that can be evaluated directly in a shader. Furthermore, since the learned features can be decoded continuously, our model allows for random uv sampling and smooth transitions between scales without any subsequent filtering. As a result, our neural material has a small memory footprint and can be decoded extremely fast, adding only minimal computational overhead to the rendering pipeline.
  • Item
    SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Binninger, Alexandre; Hertz, Amir; Sorkine-Hornung, Olga; Cohen-Or, Daniel; Giryes, Raja; Bermano, Amit H.; Kalogerakis, Evangelos
    We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of abstract nature. Our method allows users to quickly and easily sketch a shape, and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encodings, subsequently feeding them into a transformer decoder that converts them to shape embeddings suitable for editing 3D neural implicit shapes. SENS provides intuitive sketch-based generation and editing, and also succeeds in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract and imprecise sketches. Additionally, SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal. It also offers part-based modeling capabilities, enabling the combination of features from multiple sketches to create more complex and customized 3D shapes. We demonstrate the effectiveness of our model compared to the state-of-the-art using objective metric evaluation criteria and a user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase our method's intuitive sketch-based shape editing capabilities, and validate it through a usability study.
  • Item
    Physically-Based Lighting for 3D Generative Models of Cars
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Violante, Nicolas; Gauthier, Alban; Diolatzis, Stavros; Leimkühler, Thomas; Drettakis, George; Bermano, Amit H.; Kalogerakis, Evangelos
    Recent work has demonstrated that Generative Adversarial Networks (GANs) can be trained to generate 3D content from 2D image collections, by synthesizing features for neural radiance field rendering. However, most such solutions generate radiance, with lighting entangled with materials. This results in unrealistic appearance, since lighting cannot be changed and view-dependent effects such as reflections do not move correctly with the viewpoint. In addition, many methods have difficulty with full 360° rotations, since they are often designed for mainly front-facing scenes such as faces. We introduce a new 3D GAN framework that addresses these shortcomings, allowing multi-view-coherent 360° viewing and, at the same time, relighting for objects with shiny reflections, which we exemplify using a car dataset. The success of our solution stems from three main contributions. First, we estimate initial camera poses for a dataset of car images, and then learn to refine the distribution of camera parameters while training the GAN. Second, we propose an efficient Image-Based Lighting model, which we use in a 3D GAN to generate disentangled reflectance, as opposed to the radiance synthesized in most previous work. The material is used for physically-based rendering with a dataset of environment maps. Third, we improve the 3D GAN architecture compared to previous work and design a careful training strategy that allows effective disentanglement. Our model is the first to generate a variety of 3D cars that are multi-view consistent and that can be relit interactively with any environment map.
  • Item
    Real-Time Underwater Spectral Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Monzon, Nestor; Gutierrez, Diego; Akkaynak, Derya; Muñoz, Adolfo; Bermano, Amit H.; Kalogerakis, Evangelos
    The light field in an underwater environment is characterized by complex multiple scattering interactions and wavelength-dependent attenuation, requiring significant computational resources for the simulation of underwater scenes. We present a novel approach that makes it possible to simulate multi-spectral underwater scenes, in a physically-based manner, in real time. Our key observation is the following: In the vertical direction, the steady decay in irradiance as a function of depth is characterized by the diffuse downwelling attenuation coefficient, which oceanographers routinely measure for different types of waters. We rely on a database of such real-world measurements to obtain an analytical approximation to the Radiative Transfer Equation, allowing for real-time spectral rendering with results comparable to Monte Carlo ground-truth references, in a fraction of the time. We show results simulating underwater appearance for the different optical water types, including volumetric shadows and dynamic, spatially varying lighting near the water surface.
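    The quantity the abstract builds on, the diffuse downwelling attenuation coefficient K_d(λ), gives a simple exponential depth falloff of downwelling irradiance, E_d(z, λ) = E_d(0, λ) exp(-K_d(λ) z). A worked example with illustrative per-band coefficients (placeholders roughly shaped like clear ocean water, not the paper's measured water types):

    import math

    # Illustrative K_d values (1/m) per wavelength band; red attenuates fastest.
    K_d = {"red (650nm)": 0.35, "green (550nm)": 0.07, "blue (450nm)": 0.02}

    E0 = 1.0                        # normalized surface downwelling irradiance
    for depth in (1.0, 5.0, 20.0):
        bands = {band: E0 * math.exp(-k * depth) for band, k in K_d.items()}
        print(f"z = {depth:>4} m:", {b: round(e, 4) for b, e in bands.items()})

    At 20 m the red band is essentially gone while blue survives, which is why deep underwater scenes shift toward blue-green.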
  • Item
    Physically Based Real-Time Rendering of Atmospheres using Mie Theory
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Schneegans, Simon; Meyran, Tim; Ginkel, Ingo; Zachmann, Gabriel; Gerndt, Andreas; Bermano, Amit H.; Kalogerakis, Evangelos
    Most real-time rendering models for atmospheric effects have been designed and optimized for Earth's atmosphere. Some authors have proposed approaches for rendering other atmospheres, but these methods still use approximations that are only valid on Earth. For instance, the iconic blue glow of Martian sunsets cannot be represented properly, as the complex interference effects of light scattered by dust particles cannot be captured by these approximations. In this paper, we present an approach for generalizing an existing model to make it capable of rendering extraterrestrial atmospheres. This is done by replacing the approximations with a physical model based on Mie Theory. We use the particle-size distribution, the particle-density distribution, as well as the wavelength-dependent refractive index of atmospheric particles as input. To demonstrate the feasibility of this idea, we extend the model by Bruneton et al. [BN08] and implement it in CosmoScout VR, an open-source visualization of our Solar System. First, we use Mie Theory to precompute the scattering behaviour of a particle mixture. Then, multi-scattering is simulated, and finally the precomputation results are used for real-time rendering. We demonstrate that this not only improves the visualization of the Martian atmosphere, but also creates more realistic results for our own atmosphere.
  • Item
    An Empirically Derived Adjustable Model for Particle Size Distributions in Advection Fog
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Kolárová, Monika; Lachiver, Loïc; Wilkie, Alexander; Bermano, Amit H.; Kalogerakis, Evangelos
    Realistically modelled atmospheric phenomena are a long-standing research topic in rendering. While significant progress has been made in modelling clear skies and clouds, fog has often been simplified as a medium that is homogeneous throughout, or as a simple density gradient. However, these approximations neglect the characteristic variations real advection fog shows throughout its vertical span, and do not provide the particle distribution data needed for accurate rendering. Based on data from the meteorological literature, we developed an analytical model that yields the distribution of particle size as a function of altitude within an advection fog layer. The thickness of the fog layer is an additional input parameter, so that fog layers of varying thickness can be realistically represented. We also demonstrate that, based on Mie scattering, one can easily integrate this model into a Monte Carlo renderer. Our model is the first ever non-trivial volumetric model for advection fog that is based on real measurement data and contains all the components needed for inclusion in a modern renderer. The model is provided as an open-source component, and can serve as a reference for rendering problems that involve fog layers.
  • Item
    BallMerge: High-quality Fast Surface Reconstruction via Voronoi Balls
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Parakkat, Amal Dev; Ohrhallinger, Stefan; Eisemann, Elmar; Memari, Pooran; Bermano, Amit H.; Kalogerakis, Evangelos
    We introduce a Delaunay-based algorithm for reconstructing the underlying surface of a given set of unstructured points in 3D. The implementation is very simple, and it is designed to work in a parameter-free manner. The solution builds upon the fact that in the continuous case, a closed surface separates the set of maximal empty balls (medial balls) into an interior and exterior. Based on discrete input samples, our reconstructed surface consists of the interface between Voronoi balls, which approximate the interior and exterior medial balls. An initial set of Voronoi balls is iteratively processed, merging Voronoi-ball pairs if they fulfil an overlapping error criterion. Our complete open-source reconstruction pipeline performs up to two quick linear-time passes on the Delaunay complex to output the surface, making it an order of magnitude faster than the state of the art while being competitive in memory usage and often superior in quality. We propose two variants (local and global), which are carefully designed to target two different reconstruction scenarios for watertight surfaces from accurate or noisy samples, as well as real-world scanned data sets, exhibiting noise, outliers, and large areas of missing data. The results of the global variant are, by definition, watertight, suitable for numerical analysis and various applications (e.g., 3D printing). Compared to classical Delaunay-based reconstruction techniques, our method is highly stable and robust to noise and outliers, evidenced via various experiments, including on real-world data with challenges such as scan shadows, outliers, and noise, even without additional preprocessing.
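    The core merging idea can be sketched as follows (this is not the paper's exact criterion or pipeline): compute the circumscribed balls of Delaunay tetrahedra, then greedily merge Delaunay-adjacent ball pairs whose centres lie well inside one another, tracked with a union-find structure. The overlap test and tolerance below are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumball(pts):
    """Circumcenter and radius of a tetrahedron (pts: 4x3)."""
    A = 2.0 * (pts[1:] - pts[0])
    b = (pts[1:] ** 2).sum(1) - (pts[0] ** 2).sum()
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(c - pts[0])

def merge_balls(points, overlap_tol=0.8):
    """Greedy sketch: merge adjacent Voronoi balls whose centre distance
    is small relative to the smaller radius (an assumed criterion)."""
    tri = Delaunay(points)
    balls = [circumball(points[s]) for s in tri.simplices]
    parent = list(range(len(balls)))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, nbrs in enumerate(tri.neighbors):
        for j in nbrs:
            if j == -1:
                continue               # convex-hull boundary
            (ci, ri), (cj, rj) = balls[i], balls[j]
            if np.linalg.norm(ci - cj) < overlap_tol * min(ri, rj):
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(balls))]

labels = merge_balls(np.random.rand(200, 3))   # ball-cluster labels
```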
  • Item
    Non-Euclidean Sliced Optimal Transport Sampling
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Genest, Baptiste; Courty, Nicolas; Coeurjolly, David; Bermano, Amit H.; Kalogerakis, Evangelos
    In machine learning and computer graphics, a fundamental task is the approximation of a probability density function through a well-dispersed collection of samples. Providing a formal metric for measuring the distance between probability measures on general spaces, Optimal Transport (OT) emerges as a pivotal theoretical framework within this context. However, the associated computational burden is prohibitive in most real-world scenarios. Leveraging the simple structure of OT in 1D, Sliced Optimal Transport (SOT) has appeared as an efficient alternative to generate samples in Euclidean spaces. This paper pushes the boundaries of SOT utilization in computational geometry problems by extending its application to sample densities residing on more diverse mathematical domains, including the spherical space S^d, the hyperbolic plane H^d, and the real projective plane P^d. Moreover, it ensures the quality of these samples by achieving a blue noise characteristic, regardless of the dimensionality involved. The robustness of our approach is highlighted through its application to various geometry processing tasks, such as the intrinsic blue noise sampling of meshes, as well as the sampling of directions and rotations. These applications collectively underscore the efficacy of our methodology.
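    To convey the flavour of a sliced step on the 2-sphere, the toy sketch below takes slices as longitudes around a random axis, matches the sorted source and target longitudes, and rotates the sources accordingly. It is a simplification under assumed uniform steps; the paper's geodesic slicing and its blue-noise guarantees are not reproduced.

```python
import numpy as np

def rotate(points, axis, angles):
    """Rodrigues rotation of each point about a unit axis by its own angle."""
    cos, sin = np.cos(angles)[:, None], np.sin(angles)[:, None]
    return (points * cos + np.cross(axis, points) * sin
            + axis * (points @ axis)[:, None] * (1 - cos))

def spherical_sliced_step(src, tgt, step=0.5, rng=None):
    """One sketched slice on S^2: match source/target longitudes around
    a random axis and rotate sources toward their matched targets."""
    rng = rng or np.random.default_rng()
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    e1 = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-8:
        e1 = np.cross(axis, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(axis, e1)
    lon = lambda p: np.arctan2(p @ e2, p @ e1)
    ls, lt = lon(src), lon(tgt)
    order_s, order_t = np.argsort(ls), np.argsort(lt)
    delta = np.zeros(len(src))
    delta[order_s] = (lt[order_t] - ls[order_s] + np.pi) % (2 * np.pi) - np.pi
    return rotate(src, axis, step * delta)

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3)); src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt = rng.normal(size=(500, 3)); tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
for _ in range(100):
    src = spherical_sliced_step(src, tgt, rng=rng)   # src drifts toward tgt
```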
  • Item
    GLS-PIA: n-Dimensional Spherical B-Spline Curve Fitting based on Geodesic Least Square with Adaptive Knot Placement
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhao, Yuming; Wu, Zhongke; Wang, Xingce; Bermano, Amit H.; Kalogerakis, Evangelos
    Due to the widespread applications of curves on n-dimensional spheres, fitting curves on n-dimensional spheres has received increasing attention in recent years. However, due to the non-Euclidean nature of spheres, curve fitting methods on n-dimensional spheres often struggle to balance fitting accuracy and curve fairness. In this paper, we propose a new fitting framework, GLS-PIA, for parameterized point sets on n-dimensional spheres to address this challenge, and we prove the convergence of the method. Firstly, we propose a progressive iterative approximation method based on geodesic least squares which can directly optimize the geodesic least squares loss on the n-sphere, improving the accuracy of the fitting. Additionally, we use an error allocation method based on contribution coefficients to ensure the fairness of the fitting curve. Secondly, we propose an adaptive knot placement method based on geodesic difference to estimate a more reasonable distribution of control points in the parameter domain, placing more control points in areas with greater detail. This enables B-spline curves to capture more details with a limited number of control points. Experimental results demonstrate that our framework achieves outstanding performance, especially in handling imbalanced data points. (In this paper, ''sphere'' refers to the n-sphere (n ≥ 2) unless otherwise specified.)
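    The paper's geodesic iteration is not reproduced here, but its structure can be conveyed by a classic Euclidean LSPIA-style step, in which control points repeatedly absorb weighted fitting residuals; the paper replaces these Euclidean residuals with geodesic ones on the n-sphere. The knot vector and step rule below are generic assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def lspia_fit(data, params, knots, degree=3, iters=100):
    """Euclidean LSPIA-style progressive iterative approximation:
    each round, control points absorb the basis-weighted average of
    the fitting residuals at their associated parameters."""
    n_ctrl = len(knots) - degree - 1
    B = BSpline.design_matrix(params, knots, degree).toarray()
    ctrl = data[np.linspace(0, len(data) - 1, n_ctrl).astype(int)].copy()
    for _ in range(iters):
        residual = data - B @ ctrl                  # pointwise fitting error
        ctrl += B.T @ residual / B.sum(0)[:, None]  # distribute the errors
    return ctrl

t = np.linspace(0.0, 1.0, 200, endpoint=False)
data = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]  # a planar circle
knots = np.r_[np.zeros(3), np.linspace(0.0, 1.0, 12), np.ones(3)]
ctrl = lspia_fit(data, t, knots)
```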
  • Item
    Estimating Cloth Simulation Parameters From Tag Information and Cusick Drape Test
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Ju, Eunjung; Kim, Kwang-yun; Yoon, Sungjin; Shim, Eungjune; Kang, Gyoo-Chul; Chang, Phil Sik; Choi, Myung Geol; Bermano, Amit H.; Kalogerakis, Evangelos
    In recent years, the fashion apparel industry has been increasingly employing virtual simulations for the development of new products. The first step in virtual garment simulation involves identifying the optimal simulation parameters that accurately reproduce the drape properties of the actual fabric. Recent techniques advocate for a data-driven approach, estimating parameters from outcomes of a Cusick drape test. Such methods deviate from standard Cusick drape tests, introducing high-cost tools, which reduces their practicality. Our research presents a more practical model, utilizing 2D silhouette images from the ISO-standardized Cusick drape test. Notably, while past models have shown limitations in estimating stretching parameters, our novel approach leverages the fabric's tag information, including fabric type and fiber composition. Our proposed model functions as a cascaded system: first, it estimates stretching parameters using tag information; then, in the subsequent step, it considers the estimated stretching parameters alongside the fabric sample's Cusick drape test results to determine bending parameters. We validated our model against existing methods and applied it in practical scenarios, showing promising outcomes.
  • Item
    Neural Garment Dynamics via Manifold-Aware Transformers
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Li, Peizhuo; Wang, Tuanfeng Y.; Kesdogan, Timur Levent; Ceylan, Duygu; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos
    Data-driven and learning-based solutions for modeling dynamic garments have significantly advanced, especially in the context of digital humans. However, existing approaches often focus on modeling garments with respect to a fixed parametric human body model and are limited to garment geometries that were seen during training. In this work, we take a different approach and model the dynamics of a garment by exploiting its local interactions with the underlying human body. Specifically, as the body moves, we detect local garment-body collisions, which drive the deformation of the garment. At the core of our approach is a mesh-agnostic garment representation and a manifold-aware transformer network design, which together enable our method to generalize to unseen garment and body geometries. We evaluate our approach on a wide variety of garment types and motion sequences and provide competitive qualitative and quantitative results with respect to the state of the art.
  • Item
    Practical Method to Estimate Fabric Mechanics from Metadata
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Dominguez-Elvira, Henar; Nicás, Alicia; Cirio, Gabriel; Rodríguez, Alejandro; Garces, Elena; Bermano, Amit H.; Kalogerakis, Evangelos
    Estimating fabric mechanical properties is crucial to create realistic digital twins. Existing methods typically require testing physical fabric samples with expensive devices or cumbersome capture setups. In this work, we propose a method to estimate fabric mechanics just from known manufacturer metadata such as the fabric family, the density, the composition, and the thickness. Further, to alleviate the need to know the fabric family (which might be ambiguous or unknown to non-specialists), we propose an end-to-end neural method that works with planar images of the textile as input. We evaluate our methods using extensive tests that include the industry-standard Cusick drape test, and demonstrate that both of them produce drapes that strongly correlate with the ground-truth estimates provided by lab equipment. Our method is the first to propose such a simple capture process for mechanical properties, outperforming other methods that require testing the fabric in specific setups.
  • Item
    Polygon Laplacian Made Robust
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Bunge, Astrid; Bukenberger, Dennis R.; Wagner, Sven Dominik; Alexa, Marc; Botsch, Mario; Bermano, Amit H.; Kalogerakis, Evangelos
    Discrete Laplacians are the basis for various tasks in geometry processing. While the most desirable properties of the discretization invariably lead to the so-called cotangent Laplacian for triangle meshes, applying the same principles to polygon Laplacians leaves degrees of freedom in their construction. From linear finite elements it is well-known how the shape of triangles affects both the error and the operator's condition. We notice that shape quality can be encapsulated as the trace of the Laplacian and suggest that trace minimization is a helpful tool to improve numerical behavior. We apply this observation to the polygon Laplacian constructed from a virtual triangulation [BHKB20] to derive optimal parameters per polygon. Moreover, we devise a smoothing approach for the vertices of a polygon mesh to minimize the trace. We analyze the properties of the optimized discrete operators and show their superiority over generic parameter selection in theory and through various experiments.
  • Item
    Advancing Front Surface Mapping
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Livesu, Marco; Bermano, Amit H.; Kalogerakis, Evangelos
    We present Advancing Front Mapping (AFM), a novel algorithm for the computation of injective maps to simple planar domains. AFM is inspired by the advancing front meshing paradigm, which is here revisited to operate on two embeddings at once, becoming a tool for compatible mesh generation. AFM extends the capabilities of existing robust approaches, supporting a broader set of embeddings (star-shaped polygons) with a direct approach, without resorting to intermediate constructions. Our method relies only on two topological operators (split and flip) and on the computation of segment intersections, thus permitting a valid embedding to be computed without solving any numerical problem. AFM is therefore easy to implement, debug and deploy. This article is mainly focused on the presentation of the compatible advancing front idea and on the demonstration that the algorithm provably converges to an injective map. We also complement our theoretical analysis with an extensive practical validation, executing more than one billion advancing front moves on 36K mapping tasks.
  • Item
    The Impulse Particle-In-Cell Method
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Sancho, Sergio; Tang, Jingwei; Batty, Christopher; Azevedo, Vinicius C.; Bermano, Amit H.; Kalogerakis, Evangelos
    An ongoing challenge in fluid animation is the faithful preservation of vortical details, which impacts the visual depiction of flows. We propose the Impulse Particle-In-Cell (IPIC) method, a novel extension of the popular Affine Particle-In-Cell (APIC) method that makes use of the impulse gauge formulation of the fluid equations. Our approach performs a coupled advection-stretching during particle-based advection to better preserve circulation and vortical details. The associated algorithmic changes are simple and straightforward to implement, and our results demonstrate that the proposed method is able to achieve more energetic and visually appealing smoke and liquid flows than APIC.
  • Item
    Wavelet Potentials: An Efficient Potential Recovery Technique for Pointwise Incompressible Fluids
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Lyu, Luan; Ren, Xiaohua; Cao, Wei; Zhu, Jian; Wu, Enhua; Yang, Zhi-Xin; Bermano, Amit H.; Kalogerakis, Evangelos
    We introduce an efficient technique for recovering the vector potential in wavelet space to simulate pointwise incompressible fluids. This technique ensures that fluid velocities remain divergence-free at any point within the fluid domain and preserves local volume during the simulation. Divergence-free wavelets are utilized to calculate the wavelet coefficients of the vector potential, resulting in a smooth vector potential with enhanced accuracy, even when the input velocities exhibit some degree of divergence. This enhanced accuracy eliminates the need for additional computational time to achieve a specific accuracy threshold, as fewer iterations are required for the pressure Poisson solver. Additionally, in 3D, since the wavelet transform is computed in place, only the memory for storing the vector potential is required. These two features make the method remarkably efficient for recovering the vector potential for fluid simulation. Furthermore, the method can handle various boundary conditions during the wavelet transform, making it adaptable for simulating fluids with Neumann and Dirichlet boundary conditions. Our approach is highly parallelizable and features a time complexity of O(n), allowing for seamless deployment on GPUs and yielding remarkable computational efficiency. Experiments demonstrate that, taking into account the time consumed by the pressure Poisson solver, the method achieves an approximate 2x speedup on GPUs compared to state-of-the-art vector potential recovery techniques while maintaining a precision level of 10^-6 with single-precision floats. The source code of ''Wavelet Potentials'' can be found at https://github.com/yours321dog/WaveletPotentials.
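    The wavelet machinery itself is involved, but the invariant it targets is easy to state: a velocity field obtained as the curl of a vector potential is divergence-free. The short check below illustrates this discretely with central differences; it is a property demonstration, not part of the paper's pipeline.

```python
import numpy as np

def curl(psi, h):
    """psi: (3, nx, ny, nz) vector potential on a regular grid.
    Returns u = curl(psi) via central differences (np.gradient)."""
    dpx = np.gradient(psi[0], h)   # [d/dx, d/dy, d/dz] of psi_x
    dpy = np.gradient(psi[1], h)
    dpz = np.gradient(psi[2], h)
    return np.stack([dpz[1] - dpy[2],    # u = dpsi_z/dy - dpsi_y/dz
                     dpx[2] - dpz[0],    # v = dpsi_x/dz - dpsi_z/dx
                     dpy[0] - dpx[1]])   # w = dpsi_y/dx - dpsi_x/dy

def divergence(vel, h):
    return (np.gradient(vel[0], h)[0] + np.gradient(vel[1], h)[1]
            + np.gradient(vel[2], h)[2])

n = 32
h = 1.0 / n
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
psi = np.stack([np.sin(2 * np.pi * Y) * Z,
                np.cos(2 * np.pi * Z) * X,
                np.sin(2 * np.pi * X) * Y])
vel = curl(psi, h)
# near zero in the interior; boundary stencils leave small residuals
print(np.abs(divergence(vel, h)[2:-2, 2:-2, 2:-2]).max())
```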
  • Item
    Monte Carlo Vortical Smoothed Particle Hydrodynamics for Simulating Turbulent Flows
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Ye, Xingyu; Wang, Xiaokun; Xu, Yanrui; Kosinka, Jiri; Telea, Alexandru C.; You, Lihua; Zhang, Jian Jun; Chang, Jian; Bermano, Amit H.; Kalogerakis, Evangelos
    For vortex particle methods relying on SPH-based simulations, the direct approach of iterating all fluid particles to capture velocity from vorticity can lead to a significant computational overhead during the Biot-Savart summation process. To address this challenge, we present a Monte Carlo vortical smoothed particle hydrodynamics (MCVSPH) method for efficiently simulating turbulent flows within an SPH framework. Our approach harnesses a Monte Carlo estimator and operates exclusively within a pre-sampled particle subset, thus eliminating the need for costly global iterations over all fluid particles. Our algorithm is decoupled from various projection loops which enforce incompressibility, independently handles the recovery of turbulent details, and seamlessly integrates with state-of-the-art SPH-based incompressibility solvers. Our approach rectifies the velocity of all fluid particles based on vorticity loss to respect the evolution of vorticity, effectively enforcing vortex motions. We demonstrate, by several experiments, that our MCVSPH method effectively preserves vorticity and creates visually prominent vortical motions.
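    The core estimator can be sketched independently of the SPH coupling: instead of summing the Biot-Savart kernel over all carrier particles, average it over a random subset and rescale by the population size. The mollification and sign convention below are common choices, not necessarily the paper's.

```python
import numpy as np

def biot_savart_mc(eval_pts, positions, vorticities, n_samples, rng, eps=1e-3):
    """Monte Carlo Biot-Savart: u(x) ~ (N / M) * sum over M sampled
    particles of K(x - x_j) x omega_j, with a mollified kernel K."""
    N = len(positions)
    idx = rng.choice(N, size=n_samples, replace=False)
    u = np.zeros_like(eval_pts)
    for j in idx:
        r = eval_pts - positions[j]
        r2 = (r * r).sum(1) + eps ** 2          # mollified squared distance
        k = -r / (4 * np.pi * r2[:, None] ** 1.5)
        u += np.cross(k, vorticities[j])
    return u * (N / n_samples)

rng = np.random.default_rng(1)
pos = rng.normal(size=(10000, 3))      # carrier particle positions
omega = rng.normal(size=(10000, 3))    # carrier vorticities
u = biot_savart_mc(pos[:100], pos, omega, n_samples=256, rng=rng)
```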
  • Item
    Computational Smocking through Fabric-Thread Interaction
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhou, Ningfeng; Ren, Jing; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos
    We formalize Italian smocking, an intricate embroidery technique that gathers flat fabric into pleats along meandering lines of stitches, resulting in pleats that fold and gather where the stitching veers. In contrast to English smocking, characterized by colorful stitches decorating uniformly shaped pleats, and Canadian smocking, which uses localized knots to form voluminous pleats, Italian smocking permits the fabric to move freely along the stitched threads following curved paths, resulting in complex and unpredictable pleats with highly diverse, irregular structures, achieved simply by pulling on the threads. We introduce a novel method for digital previewing of Italian smocking results, given the thread stitching path as input. Our method uses a coarse-grained mass-spring system to simulate the interaction between the threads and the fabric. This configuration guides the fine-level fabric deformation through an adaptation of the state-of-the-art simulator, C-IPC [LKJ21]. Our method models the general problem of fabric-thread interaction and can be readily adapted to preview Canadian smocking as well. We compare our results to baseline approaches and physical fabrications to demonstrate the accuracy of our method.
  • Item
    Unfolding via Mesh Approximation using Surface Flows
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zawallich, Lars; Pajarola, Renato; Bermano, Amit H.; Kalogerakis, Evangelos
    Manufacturing a 3D object by folding from a 2D material is typically done in four steps: 3D surface approximation, unfolding the surface into a plane, printing and cutting the outline of the unfolded shape, and refolding it to a 3D object. Usually, these steps are treated separately from each other. In this work we jointly address the first two pipeline steps by allowing the 3D representation to smoothly change while unfolding. This way, we increase the chances of overcoming situations in which the shape cannot be unfolded. To join the two pipeline steps, our work proposes and combines different surface flows with a Tabu Unfolder. We empirically investigate the effects that different surface flows have on the performance as well as on the quality of the unfoldings. Additionally, we demonstrate the ability to solve cases by approximation that comparable algorithms either have to segment or cannot solve at all.
  • Item
    Freeform Shape Fabrication by Kerfing Stiff Materials
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Speetzen, Nils; Kobbelt, Leif; Bermano, Amit H.; Kalogerakis, Evangelos
    Fast, flexible, and cost-efficient production of 3D models from 2D material sheets is a key component in digital fabrication and prototyping. In order to achieve high-quality approximations of freeform shapes, a common set of methods aim to produce bendable 2D cutouts that are then assembled. So far, bent surfaces have been achieved automatically by computing developable patches of the input surface, e.g. in the context of papercraft. For stiff materials such as medium-density fibreboard (MDF) or plywood, the 2D cutouts require the application of additional cutting patterns (''kerfing'') to make them bendable. Such kerf patterns are commonly constructed with considerable user input, e.g. in architectural design. We propose a fully automatic method that produces kerfed cutouts suitable for the assembly of freeform shapes from stiff material sheets. By exploring the degrees of freedom emerging from the choice of bending directions, the creation of box joints at the patch boundaries, as well as the application of kerf cuts with adaptive density, our method is able to achieve a high-quality approximation of the input.
  • Item
    Physically-based Analytical Erosion for fast Terrain Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Tzathas, Petros; Gailleton, Boris; Steer, Philippe; Cordonnier, Guillaume; Bermano, Amit H.; Kalogerakis, Evangelos
    Terrain generation methods have long been divided between procedural and physically-based. Procedural methods build upon the fast evaluation of a mathematical function but suffer from a lack of geological consistency, while physically-based simulation enforces this consistency at the cost of thousands of iterations unraveling the history of the landscape. In particular, the simulation of the competition between tectonic uplift and fluvial erosion expressed by the stream power law has raised recent interest in computer graphics, as it allows the generation and control of consistent large-scale mountain ranges, albeit at the cost of a lengthy simulation. In this paper, we explore the analytical solutions of the stream power law and propose a method that is both physically-based and procedural, allowing fast and consistent large-scale terrain generation. In our approach, time is no longer the stopping criterion of an iterative process but acts as the parameter of a mathematical function, a slider that controls the aging of the input terrain from a subtle erosion to the complete replacement by a fully formed mountain range. While analytical solutions have been proposed by the geomorphology community for the 1D case, extending them to a 2D heightmap proves challenging. We propose an efficient implementation of the analytical solutions with a multigrid-accelerated iterative process, and solutions to incorporate landslides and hillslope processes, two erosion factors that complement the stream power law.
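    For reference, the stream power law balances uplift against fluvial erosion, dz/dt = U - K A^m S^n, and its 1D steady state gives the channel slope in closed form. The sketch below builds a steady-state river profile from it; all constants, including the Hack's-law area fit, are illustrative rather than taken from the paper.

```python
import numpy as np

# Stream power law: dz/dt = U - K * A**m * S**n. At steady state the
# local slope is S = (U / (K * A**m)) ** (1/n).
U, K, m, n = 1e-3, 1e-5, 0.5, 1.0      # illustrative constants
x = np.linspace(1.0, 1e4, 1000)        # distance from the divide (m)
A = 6.7 * x ** 1.67                    # Hack's-law drainage area (assumed)
S = (U / (K * A ** m)) ** (1.0 / n)    # steady-state channel slope
dx = x[1] - x[0]
z = np.flip(np.cumsum(np.flip(S))) * dx  # elevation above the outlet
```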
  • Item
    Volcanic Skies: Coupling Explosive Eruptions with Atmospheric Simulation to Create Consistent Skyscapes
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Pretorius, Pieter C.; Gain, James; Lastic, Maud; Cordonnier, Guillaume; Chen, Jiong; Rohmer, Damien; Cani, Marie-Paule; Bermano, Amit H.; Kalogerakis, Evangelos
    Explosive volcanic eruptions rank among the most terrifying natural phenomena, and are thus frequently depicted in films, games, and other media, usually with a bespoke one-off solution. In this paper, we introduce the first general-purpose model for bi-directional interaction between the atmosphere and a volcano plume. In line with recent interactive volcano models, we approximate the plume dynamics with Lagrangian disks and spheres and the atmosphere with sparse layers of 2D Eulerian grids, enabling us to focus on the transfer of physical quantities such as temperature, ash, moisture, and wind velocity between these sub-models. We subsequently generate volumetric animations by noise-based procedural upsampling keyed to aspects of advection, convection, moisture, and ash content to generate a fully-realized volcanic skyscape. Our model captures most of the visually salient features emerging from volcano-sky interaction, such as windswept plumes, enmeshed cap, bell and skirt clouds, shockwave effects, ash rain, and sheathes of lightning visible in the dark.
  • Item
    Navigating the Manifold of Translucent Appearance
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Lanza, Dario; Masia, Belen; Jarabo, Adrian; Bermano, Amit H.; Kalogerakis, Evangelos
    We present a perceptually-motivated manifold for translucent appearance, designed for intuitive editing of translucent materials by navigating through the manifold. Classic tools for editing translucent appearance, based on the use of sliders to tune a number of parameters, are challenging for non-expert users: these parameters have a highly non-linear effect on appearance, and exhibit complex interplay and similarity relations between them. Instead, we pose editing as a navigation task in a low-dimensional space of appearances, which abstracts the user from the underlying optical parameters. To achieve this, we build a low-dimensional continuous manifold of translucent appearance that correlates with how humans perceive this type of material. We first analyze the correlation of different distance metrics in image space with human perception. We select the best-performing metric to build a low-dimensional manifold, which can be used to navigate the space of translucent appearance. To evaluate the validity of our proposed manifold within its intended application scenario, we build an editing interface that leverages the manifold, and relies on image navigation plus a fine-tuning step to edit appearance. We compare our intuitive interface to a traditional, slider-based one in a user study, demonstrating its effectiveness and superior performance when editing translucent objects.
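    One common way to realize such a navigable space, sketched below under strong simplifications, is to embed pairwise perceptual distances with metric multidimensional scaling; here random stand-in features replace the paper's perceptually validated image metric.

```python
import numpy as np
from sklearn.manifold import MDS

# Stand-in appearance descriptors for 50 translucent renderings; in the
# paper, distances come from the image metric that best matches humans.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 8))
D = np.linalg.norm(feats[:, None] - feats[None], axis=-1)  # pairwise dists

# Embed into a 2D space that an editing interface could navigate.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
```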
  • Item
    Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Liang, Hanxue; Wu, Tianhao; Hanji, Param; Banterle, Francesco; Gao, Hongyun; Mantiuk, Rafal; Öztireli, Cengiz; Bermano, Amit H.; Kalogerakis, Evangelos
    Neural view synthesis (NVS) is one of the most successful techniques for synthesizing free viewpoint videos, capable of achieving high fidelity from only a sparse set of captured images. This success has led to many variants of the techniques, each evaluated on a set of test views typically using image quality metrics such as PSNR, SSIM, or LPIPS. There has been a lack of research on how NVS methods perform with respect to perceived video quality. We present the first study on perceptual evaluation of NVS and NeRF variants. For this study, we collected two datasets of scenes captured in a controlled lab environment as well as in-the-wild. In contrast to existing datasets, these scenes come with reference video sequences, allowing us to test for temporal artifacts and subtle distortions that are easily overlooked when viewing only static images. We measured the quality of videos synthesized by several NVS methods in a well-controlled perceptual quality assessment experiment as well as with many existing state-of-the-art image/video quality metrics. We present a detailed analysis of the results and recommendations for dataset and metric selection for NVS evaluation.
  • Item
    Predicting Perceived Gloss: Do Weak Labels Suffice?
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Guerrero-Viu, Julia; Subias, Jose Daniel; Serrano, Ana; Storrs, Katherine R.; Fleming, Roland W.; Masia, Belen; Gutierrez, Diego; Bermano, Amit H.; Kalogerakis, Evangelos
    Estimating perceptual attributes of materials directly from images is a challenging task due to their complex, not fully understood interactions with external factors, such as geometry and lighting. Supervised deep learning models have recently been shown to outperform traditional approaches, but rely on large datasets of human-annotated images for accurate perception predictions. Obtaining reliable annotations is a costly endeavor, aggravated by the limited ability of these models to generalise to different aspects of appearance. In this work, we show how a much smaller set of human annotations (''strong labels'') can be effectively augmented with automatically derived ''weak labels'' in the context of learning a low-dimensional image-computable gloss metric. We evaluate three alternative weak labels for predicting human gloss perception from limited annotated data. Incorporating weak labels enhances our gloss prediction beyond the current state of the art. Moreover, it enables a substantial reduction in human annotation costs without sacrificing accuracy, whether working with rendered images or real photographs.
  • Item
    TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Wenninger, Stephan; Kemper, Fabian; Schwanecke, Ulrich; Botsch, Mario; Bermano, Amit H.; Kalogerakis, Evangelos
    Human shape spaces have been extensively studied, as they are a core element of human shape and pose inference tasks. Classic methods for creating a human shape model register a surface template mesh to a database of 3D scans and use dimensionality reduction techniques, such as Principal Component Analysis, to learn a compact representation. While these shape models enable global shape modifications by correlating anthropometric measurements with the learned subspace, they only provide limited localized shape control. We instead register a volumetric anatomical template, consisting of skeleton bones and soft tissue, to the surface scans of the CAESAR database. We further enlarge our training data to the full Cartesian product of all skeletons and all soft tissues using physically plausible volumetric deformation transfer. This data is then used to learn an anatomically constrained volumetric human shape model in a self-supervised fashion. The resulting TailorMe model enables shape sampling, localized shape manipulation, and fast inference from given surface scans.
  • Item
    CharacterMixer: Rig-Aware Interpolation of 3D Characters
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhan, Xiao; Fu, Rao; Ritchie, Daniel; Bermano, Amit H.; Kalogerakis, Evangelos
    We present CharacterMixer, a system for blending two rigged 3D characters with different mesh and skeleton topologies while maintaining a rig throughout interpolation. CharacterMixer also enables interpolation during motion for such characters, a novel feature. Interpolation is an important shape editing operation, but prior methods have limitations when applied to rigged characters: they either ignore the rig (making interpolated characters no longer posable) or use a fixed rig and mesh topology. To handle different mesh topologies, CharacterMixer uses a signed distance field (SDF) representation of character shapes, with one SDF per bone. To handle different skeleton topologies, it computes a hierarchical correspondence between source and target character skeletons and interpolates the SDFs of corresponding bones. This correspondence also allows the creation of a single ''unified skeleton'' for posing and animating interpolated characters. We show that CharacterMixer produces qualitatively better interpolation results than two state-of-the-art methods while preserving a rig throughout interpolation. Project page: https://seanxzhan.github.io/projects/CharacterMixer.
  • Item
    Stylize My Wrinkles: Bridging the Gap from Simulation to Reality
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Weiss, Sebastian; Stanhope, Jackson; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Bermano, Amit H.; Kalogerakis, Evangelos
    Modeling realistic human skin with pores and wrinkles down to the milli- and micrometer resolution is a challenging task. Prior work showed that such micro geometry can be efficiently generated through simulation methods, or in specialized cases via 3D scanning of real skin. Simulation methods allow the wrinkles on the face to be highly customized, but can lead to a synthetic look. Scanning methods can lead to a more organic look for the micro details; however, these methods are only applicable to small skin patches due to the required image resolution. In this work we aim to overcome the gap between synthetic simulation and real skin scanning by proposing a method that can be applied to large skin regions (e.g. an entire face) with the controllability of simulation and the organic look of real micro details. Our method is based on style transfer at its core, where we use scanned displacement maps of real skin patches as style images and displacement maps from an artist-friendly simulation method as content images. We build a library of displacement maps as style images by employing a simplified scanning setup that can capture high-resolution patches of real skin. To create the content component for the style transfer and to facilitate parameter tuning for the simulation, we design a library of preset parameter values depicting different skin types, and present a new method to fit the simulation parameters to scanned skin patches. This allows fully automatic parameter generation, interpolation and stylization across entire faces. We evaluate our method by generating realistic skin micro details for various subjects of different ages and genders, and demonstrate that our approach achieves a more organic and natural look than simulation alone.
  • Item
    Enhancing Image Quality Prediction with Self-supervised Visual Masking
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Çogalan, Ugur; Bemana, Mojtaba; Seidel, Hans-Peter; Myszkowski, Karol; Bermano, Amit H.; Kalogerakis, Evangelos
    Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short in capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve the perceptual accuracy of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to predict a visual masking model that modulates reference and distorted images in a way that penalizes the visual errors based on their visibility. Since the ground-truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely based on mean opinion scores (MOS) collected from an FR-IQM dataset. Our approach results in enhanced FR-IQM metrics that are better aligned with human judgments, both visually and quantitatively.
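    The mechanism can be sketched with a hand-crafted placeholder: modulate both images by a visibility mask before handing them to a standard metric. Below, the mask simply lowers weight in high-contrast regions and the base metric is PSNR; the paper instead learns the masking model from MOS data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_mask(img, sigma=2.0, strength=4.0):
    """Placeholder masking model: sensitivity drops where local
    contrast is high (the paper learns this model from MOS data)."""
    local_mean = gaussian_filter(img, sigma)
    local_contrast = gaussian_filter(np.abs(img - local_mean), sigma)
    return 1.0 / (1.0 + strength * local_contrast)

def masked_psnr(ref, dist):
    m = contrast_mask(ref)                    # mask derived from the reference
    err = ((m * ref - m * dist) ** 2).mean()  # penalize visible errors only
    return 10.0 * np.log10(1.0 / max(err, 1e-12))

rng = np.random.default_rng(0)
ref = rng.random((128, 128))
dist = np.clip(ref + 0.05 * rng.normal(size=ref.shape), 0.0, 1.0)
print(masked_psnr(ref, dist))
```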
  • Item
    Enhancing Spatiotemporal Resampling with a Novel MIS Weight
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Pan, Xingyue; Zhang, Jiaxuan; Huang, Jiancong; Liu, Ligang; Bermano, Amit H.; Kalogerakis, Evangelos
    In real-time rendering, optimizing the sampling of large-scale candidates is crucial. The spatiotemporal reservoir resampling (ReSTIR) method provides an effective approach for handling large candidate samples, while the Generalized Resampled Importance Sampling (GRIS) theory provides a general framework for resampling algorithms. However, we have observed that when the generalized multiple importance sampling (MIS) weight from previous work is used during spatiotemporal reuse, variance gradually amplifies when the candidate sampling domains differ significantly. To address this issue, we propose a new MIS weight suitable for resampling that blends samples from different sampling domains, ensuring convergence of the results as the proportion of non-canonical samples increases. Additionally, we apply this weight to temporal resampling to reduce noise caused by scene changes or jitter. Our method effectively reduces energy loss in the biased version of ReSTIR DI while incurring no additional overhead, and it also suppresses artifacts caused by a high proportion of temporal samples. As a result, our approach leads to lower variance in the sampling results.
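    For context, resampled importance sampling with a streaming reservoir looks as sketched below, here with the simple constant MIS weight m_i = 1/M for a single sampling domain; the paper's contribution is a different weight for blending several domains, which is not reproduced here.

```python
import math
import random

class Reservoir:
    """Streaming weighted reservoir for resampled importance sampling."""
    def __init__(self):
        self.sample, self.w_sum, self.M = None, 0.0, 0

    def update(self, candidate, weight):
        self.w_sum += weight
        self.M += 1
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

def ris(target_pdf, source_pdf, sampler, n_candidates=32):
    """Pick one candidate approximately distributed per target_pdf,
    using the constant MIS weight m_i = 1/M."""
    r = Reservoir()
    for _ in range(n_candidates):
        x = sampler()
        w = (target_pdf(x) / source_pdf(x)) / n_candidates
        r.update(x, w)
    # unbiased contribution weight: W = w_sum / target_pdf(sample)
    W = r.w_sum / max(target_pdf(r.sample), 1e-12)
    return r.sample, W

sample, W = ris(lambda x: math.exp(-x * x),        # unnormalized target
                lambda x: 1.0 / 6.0,               # uniform pdf on [-3, 3]
                lambda: random.uniform(-3.0, 3.0))
```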
  • Item
    Neural Denoising for Deep-Z Monte Carlo Renderings
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
    We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant challenge of the cost associated with rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.
  • Item
    Learning to Stabilize Faces
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Bednarik, Jan; Wood, Erroll; Choutas, Vassilis; Bolkart, Timo; Wang, Daoye; Wu, Chenglei; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
    Nowadays, it is possible to scan faces and automatically register them with high quality. However, the resulting face meshes often need further processing: we need to stabilize them to remove unwanted head movement. Stabilization is important for tasks like game development or movie making which require facial expressions to be cleanly separated from rigid head motion. Since manual stabilization is labor-intensive, there have been attempts to automate it. However, previous methods remain impractical: they either still require some manual input, produce imprecise alignments, rely on dubious heuristics and slow optimization, or assume a temporally ordered input. Instead, we present a new learning-based approach that is simple and fully automatic. We treat stabilization as a regression problem: given two face meshes, our network directly predicts the rigid transform between them that brings their skulls into alignment. We generate synthetic training data using a 3D Morphable Model (3DMM), exploiting the fact that 3DMM parameters separate skull motion from facial skin motion. Through extensive experiments we show that our approach outperforms the state-of-the-art both quantitatively and qualitatively on the tasks of stabilizing discrete sets of facial expressions as well as dynamic facial performances. Furthermore, we provide an ablation study detailing the design choices and best practices to help others adopt our approach for their own uses.
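    For contrast with the learned regressor, the classical rigid alignment it replaces can be sketched with the Kabsch algorithm on corresponding vertices; the paper's network predicts a skull-aligning transform directly instead of relying on such per-pair optimization.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid (R, t) minimizing ||R @ p_i + t - q_i|| over corresponding
    point sets P, Q of shape (N, 3)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))                    # e.g. skull-region vertices
angle = 0.3                                      # ground-truth head rotation
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = kabsch(P, Q)                              # recovers R_true and t
```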
  • Item
    3D Reconstruction and Semantic Modeling of Eyelashes
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Kerbiriou, Glenn; Avril, Quentin; Marchal, Maud; Bermano, Amit H.; Kalogerakis, Evangelos
    High-fidelity digital human modeling has become crucial in various applications, including gaming, visual effects and virtual reality. Despite the significant impact of eyelashes on facial aesthetics, their reconstruction and modeling have been largely unexplored. In this paper, we introduce the first data-driven generative model of eyelashes based on semantic features. This model is derived from real data by introducing a new 3D eyelash reconstruction method based on multi-view images. The reconstructed data is made available, constituting the first published dataset of 3D eyelashes. Through an innovative extraction process, we determine the features of any set of eyelashes, and present detailed descriptive statistics of human eyelash shapes. The proposed eyelash model, which exclusively relies on semantic parameters, effectively captures the appearance of a set of eyelashes. Results show that the proposed model enables interactive, intuitive and realistic eyelash modeling for non-experts, enriching avatar creation and synthetic data generation pipelines.
  • Item
    ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Li, Gengyan; Sarkar, Kripasindhu; Meka, Abhimitra; Buehler, Marcel; Mueller, Franziska; Gotardo, Paulo; Hilliges, Otmar; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
    Eye gaze and expressions are crucial non-verbal signals in face-to-face communication. Visual effects and telepresence demand significant improvements in personalized tracking, animation, and synthesis of the eye region to achieve true immersion. Morphable face models, in combination with coordinate-based neural volumetric representations, show promise in solving the difficult problem of reconstructing intricate geometry (eyelashes) and synthesizing photorealistic appearance variations (wrinkles and specularities) of eye performances. We propose a novel hybrid representation, ShellNeRF, that builds a discretized volume around a 3DMM face mesh using concentric surfaces to model the deformable 'periocular' region. We define a canonical space using the UV layout of the shells that constrains the space of dense correspondence search. Combined with an explicit eyeball mesh for modeling corneal light transport, our model allows for animatable photorealistic 3D synthesis of the whole eye region. Using multi-view video input, we demonstrate significant improvements over the state of the art in expression re-enactment and transfer for high-resolution close-up views of the eye region.
  • Item
    Region-Aware Simplification and Stylization of 3D Line Drawings
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Nguyen, Vivien; Fisher, Matthew; Hertzmann, Aaron; Rusinkiewicz, Szymon; Bermano, Amit H.; Kalogerakis, Evangelos
    Shape-conveying line drawings generated from 3D models normally create closed regions in image space. These lines and regions can be stylized to mimic various artistic styles, but for complex objects, the extracted topology is unnecessarily dense, leading to unappealing and unnatural results under stylization. Prior works typically simplify line drawings without considering the regions between them, and lines and regions are stylized separately, then composited together, resulting in unintended inconsistencies. We present a method for the joint simplification of lines and regions that penalizes large changes to region structure while keeping regions closed. This feature enables region stylization that remains consistent with the outline curves and the underlying 3D geometry.
  • Item
    FontCLIP: A Semantic Typography Visual-Language Model for Multilingual Font Applications
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Tatsukawa, Yuki; Shen, I-Chao; Qi, Anran; Koyama, Yuki; Igarashi, Takeo; Shamir, Ariel; Bermano, Amit H.; Kalogerakis, Evangelos
    Acquiring the desired font for various design tasks can be challenging and requires professional typographic knowledge. While previous font retrieval or generation works have alleviated some of these difficulties, they often lack support for multiple languages and semantic attributes beyond the training data domains. To solve this problem, we present FontCLIP, a model that connects the semantic understanding of a large vision-language model with typographical knowledge. We integrate typography-specific knowledge into the comprehensive vision-language knowledge of a pretrained CLIP model through a novel fine-tuning approach. We propose to use a compound descriptive prompt that encapsulates adaptively sampled attributes from a font attribute dataset focusing on Roman alphabet characters. FontCLIP's semantic typographic latent space demonstrates two unprecedented generalization abilities. First, FontCLIP generalizes to different languages including Chinese, Japanese, and Korean (CJK), capturing the typographical features of fonts across different languages, even though it was only fine-tuned using fonts of Roman characters. Second, FontCLIP can recognize semantic attributes that are not present in the training data. FontCLIP's dual-modality and generalization abilities enable multilingual and cross-lingual font retrieval and letter shape optimization, reducing the burden of obtaining desired fonts.
  • Item
    Sketch Video Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Zheng, Yudian; Cun, Xiaodong; Xia, Menghan; Pun, Chi-Man; Bermano, Amit H.; Kalogerakis, Evangelos
    Understanding semantic intricacies and high-level concepts is essential in image sketch generation, and this challenge becomes even more formidable when applied to the domain of videos. To address this, we propose a novel optimization-based framework for sketching videos represented by frame-wise Bézier curves. In detail, we first propose a cross-frame stroke initialization approach to warm up the location and the width of each curve. Then, we optimize the locations of these curves by utilizing a semantic loss based on CLIP features and a newly designed consistency loss using the self-decomposed 2D atlas network. Built upon these design elements, the resulting sketch video showcases notable visual abstraction and temporal coherence. Furthermore, by transforming a video into vector lines through the sketching process, our method unlocks applications in sketch-based video editing and video doodling, enabled through video composition.
  • Item
    Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Bermano, Amit H.; Kalogerakis, Evangelos
    Mesh texture synthesis is a key component in the automatic generation of 3D content. Existing learning-based methods have drawbacks: they either disregard the shape manifold during texture generation or require a large number of different views to mitigate occlusion-related inconsistencies. In this paper, we present a novel surface-aware approach for mesh texture synthesis that overcomes these drawbacks by leveraging the pre-trained weights of 2D Convolutional Neural Networks (CNNs) with the same architecture, but with convolutions designed for 3D meshes. Our proposed network keeps track of the oriented patches surrounding each texel, enabling seamless texture synthesis and retaining local similarity to classical 2D convolutions with square kernels. Our approach allows us to synthesize textures that account for the geometric content of mesh surfaces, eliminating discontinuities and achieving comparable quality to 2D image synthesis algorithms. We compare our approach with state-of-the-art methods where, through qualitative and quantitative evaluations, we demonstrate that our approach is more effective for a variety of meshes and styles, while also producing visually appealing and consistent textures on meshes.
  • Item
    GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Gruber, Aurel; Collins, Edo; Meka, Abhimitra; Mueller, Franziska; Sarkar, Kripasindhu; Orts-Escolano, Sergio; Prasso, Luca; Busch, Jay; Gross, Markus; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
    High-resolution texture maps are essential to render photoreal digital humans for visual effects or to generate data for machine learning. The acquisition of high-resolution assets at scale is cumbersome: it involves enrolling a large number of human subjects, using expensive multi-view camera setups, and significant manual artistic effort to align the textures. To alleviate these problems, we introduce GANtlitz (a play on the German noun Antlitz, meaning face), a generative model that can synthesize multi-modal ultra-high-resolution face appearance maps for novel identities. Our method solves three distinct challenges: 1) the unavailability of the very large data corpus generally required for training generative models, 2) the memory and computational limitations of training a GAN at ultra-high resolutions, and 3) the consistency of appearance features such as skin color, pores, and wrinkles in high-resolution textures across different modalities. We introduce dual-style blocks, an extension to the style blocks of the StyleGAN2 architecture, which improve multi-modal synthesis. Our patch-based architecture is trained only on image patches obtained from a small set of face textures (<100) and yet allows us to generate seamless appearance maps of novel identities at 6k×4k resolution. Extensive qualitative and quantitative evaluations and baseline comparisons show the efficacy of our proposed system.
  • Item
    Stylized Face Sketch Extraction via Generative Prior with Limited Data
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Yun, Kwan; Seo, Kwanggyoon; Seo, Chang Wook; Yoon, Soyeon; Kim, Seongcheol; Ji, Soohyun; Ashtari, Amirsaman; Noh, Junyong; Bermano, Amit H.; Kalogerakis, Evangelos
    Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention. While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high-resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with 16 pairs of face images and corresponding sketches. The sketch generator utilizes part-based losses with two-stage learning for fast convergence during training for high-quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state-of-the-art sketch extraction methods and few-shot image adaptation methods for the task of extracting high-resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing. The project page can be found at https://kwanyun.github.io/stylesketch_project.
  • Item
    Cinematographic Camera Diffusion Model
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos
    Designing effective camera trajectories in virtual 3D environments is a challenging task even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters. Dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model using a transformer-based architecture to handle temporality and exploit the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned by high-level textual descriptions. We extend the work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, augmenting the designers' degree of control. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.
  • Item
    OptFlowCam: A 3D-Image-Flow-Based Metric in Camera Space for Camera Paths in Scenes with Extreme Scale Variations
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Piotrowski, Lisa; Motejat, Michael; Rössl, Christian; Theisel, Holger; Bermano, Amit H.; Kalogerakis, Evangelos
    Interpolation between camera positions is a standard problem in computer graphics and can be considered the foundation of camera path planning. As the basis for a new interpolation method, we introduce a new Riemannian metric in camera space, which measures the 3D image flow under a small movement of the camera. Building on this, we define a linear interpolation between two cameras as the shortest geodesic in camera space, for which we provide a closed-form solution after a mild simplification of the metric. Furthermore, we propose a geodesic Catmull-Rom interpolant for keyframe camera animation. We compare our approach with several standard camera interpolation methods and obtain consistently better camera paths, especially for cameras with extremely varying scales.
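    The geodesic Catmull-Rom idea can be illustrated on a simpler manifold: the Barry-Goldman pyramid evaluated with slerp on unit quaternions, shown below under an assumed uniform parameterization. The paper's geodesics live in its image-flow camera metric, not in quaternion space, so this is an analogy rather than the method itself.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between unit quaternions
    (also valid for extrapolation with t outside [0, 1])."""
    d = np.clip(np.dot(a, b), -1.0, 1.0)
    if d < 0.0:                       # take the shorter arc
        b, d = -b, -d
    theta = np.arccos(d)
    if theta < 1e-8:
        return a
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

def catmull_rom_quat(q0, q1, q2, q3, t):
    """Geodesic Catmull-Rom between q1 and q2 via repeated slerp
    (Barry-Goldman construction, uniform knots assumed)."""
    a = slerp(q0, q1, t + 1.0)
    b = slerp(q1, q2, t)
    c = slerp(q2, q3, t - 1.0)
    ab = slerp(a, b, (t + 1.0) / 2.0)
    bc = slerp(b, c, t / 2.0)
    return slerp(ab, bc, t)

rng = np.random.default_rng(0)
q = [v / np.linalg.norm(v) for v in rng.normal(size=(4, 4))]  # keyframes
mid = catmull_rom_quat(*q, t=0.5)     # orientation halfway between q1, q2
```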
  • Item
    DivaTrack: Diverse Bodies and Motions from Acceleration-Enhanced 3-Point Trackers
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Yang, Dongseok; Kang, Jiho; Ma, Lingni; Greer, Joseph; Ye, Yuting; Lee, Sung-Hee; Bermano, Amit H.; Kalogerakis, Evangelos
    Full-body avatar presence is important for immersive social and environmental interactions in digital reality. However, current devices provide only three six-degrees-of-freedom (DOF) poses, from the headset and two controllers (i.e. three-point trackers). Because it is a highly under-constrained problem, inferring full-body pose from these inputs is challenging, especially when supporting the full range of body proportions and use cases represented by the general population. In this paper, we propose a deep learning framework, DivaTrack, which outperforms existing methods when applied to diverse body sizes and activities. We augment the sparse three-point inputs with linear accelerations from Inertial Measurement Units (IMUs) to improve foot contact prediction. We then condition the otherwise ambiguous lower-body pose with the predictions of foot contact and upper-body pose in a two-stage model. We further stabilize the inferred full-body pose in a wide range of configurations by learning to blend predictions that are computed in two reference frames, each of which is designed for different types of motions. We demonstrate the effectiveness of our design on a large dataset that captures 22 subjects performing challenging locomotion for three-point tracking, including lunges, hula-hooping, and sitting. As shown in a live demo using the Meta VR headset and Xsens IMUs, our method runs in real time while accurately tracking a user's motion when they perform a diverse set of movements.