43-Issue 2
Browsing 43-Issue 2 by Issue Date, showing items 1-20 of 54.

Item: Enhancing Spatiotemporal Resampling with a Novel MIS Weight (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Pan, Xingyue; Zhang, Jiaxuan; Huang, Jiancong; Liu, Ligang; Bermano, Amit H.; Kalogerakis, Evangelos
In real-time rendering, optimizing the sampling of large-scale candidates is crucial. The spatiotemporal reservoir resampling (ReSTIR) method provides an effective approach for handling large candidate sets, while the Generalized Resampled Importance Sampling (GRIS) theory provides a general framework for resampling algorithms. However, we have observed that, when the generalized multiple importance sampling (MIS) weight from previous work is used during spatiotemporal reuse, variance is gradually amplified when the candidate sampling domains differ significantly. To address this issue, we propose a new MIS weight suitable for resampling that blends samples from different sampling domains, ensuring convergence of the results as the proportion of non-canonical samples increases. Additionally, we apply this weight to temporal resampling to reduce noise caused by scene changes or jitter. Our method effectively reduces energy loss in the biased version of ReSTIR DI while incurring no additional overhead, and it also suppresses artifacts caused by a high proportion of temporal samples. As a result, our approach leads to lower variance in the sampling results.
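
As background for the weight discussed in the abstract above, resampled importance sampling (RIS) combines candidates from several source domains by giving each candidate x_i a resampling weight w_i = m_i(x_i) * p_hat(x_i) / p_i(x_i), where m_i is an MIS weight such as the balance heuristic, and then keeping one candidate with probability proportional to w_i. The following is a minimal sketch of that generic scheme, not the paper's proposed weight; the function names and the use of the plain balance heuristic are illustrative assumptions.

    import random

    def balance_heuristic(i, x, source_pdfs):
        # MIS weight m_i(x): pdf of domain i relative to the sum over all domains.
        denom = sum(p(x) for p in source_pdfs)
        return source_pdfs[i](x) / denom if denom > 0 else 0.0

    def resample(candidates, source_pdfs, target_pdf):
        # One RIS pass: candidate i is assumed to be drawn from source_pdfs[i] (so its
        # pdf is nonzero), and at least one candidate must have a nonzero weight.
        weights = []
        for i, x in enumerate(candidates):
            m_i = balance_heuristic(i, x, source_pdfs)
            weights.append(m_i * target_pdf(x) / source_pdfs[i](x))  # m_i(x) * p_hat(x) / p_i(x)
        y = random.choices(candidates, weights=weights, k=1)[0]
        W_y = sum(weights) / target_pdf(y)  # unbiased contribution weight of the survivor
        return y, W_y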

Item: Region-Aware Simplification and Stylization of 3D Line Drawings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Nguyen, Vivien; Fisher, Matthew; Hertzmann, Aaron; Rusinkiewicz, Szymon; Bermano, Amit H.; Kalogerakis, Evangelos
Shape-conveying line drawings generated from 3D models normally create closed regions in image space. These lines and regions can be stylized to mimic various artistic styles, but for complex objects the extracted topology is unnecessarily dense, leading to unappealing and unnatural results under stylization. Prior works typically simplify line drawings without considering the regions between them, and lines and regions are stylized separately and then composited together, resulting in unintended inconsistencies. We present a method for joint simplification of lines and regions that penalizes large changes to region structure while keeping regions closed. This enables region stylization that remains consistent with the outline curves and the underlying 3D geometry.

Item: Neural Denoising for Deep-Z Monte Carlo Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing, as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines depth-resolved reconstruction at each bin with flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising-kernel application operators, which reduces artifacts caused by depth misalignment in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant cost of rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item: ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Gengyan; Sarkar, Kripasindhu; Meka, Abhimitra; Buehler, Marcel; Mueller, Franziska; Gotardo, Paulo; Hilliges, Otmar; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
Eye gaze and expressions are crucial non-verbal signals in face-to-face communication. Visual effects and telepresence demand significant improvements in personalized tracking, animation, and synthesis of the eye region to achieve true immersion. Morphable face models, in combination with coordinate-based neural volumetric representations, show promise in solving the difficult problem of reconstructing intricate geometry (eyelashes) and synthesizing photorealistic appearance variations (wrinkles and specularities) of eye performances. We propose ShellNeRF, a novel hybrid representation that builds a discretized volume around a 3DMM face mesh using concentric surfaces to model the deformable 'periocular' region. We define a canonical space using the UV layout of the shells that constrains the space of dense correspondence search. Combined with an explicit eyeball mesh for modeling corneal light transport, our model allows animatable, photorealistic 3D synthesis of the whole eye region. Using multi-view video input, we demonstrate significant improvements over the state of the art in expression re-enactment and transfer for high-resolution close-up views of the eye region.

Item: Physically-based Analytical Erosion for fast Terrain Generation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Tzathas, Petros; Gailleton, Boris; Steer, Philippe; Cordonnier, Guillaume; Bermano, Amit H.; Kalogerakis, Evangelos
Terrain generation methods have long been divided between procedural and physically-based. Procedural methods build upon the fast evaluation of a mathematical function but suffer from a lack of geological consistency, while physically-based simulation enforces this consistency at the cost of thousands of iterations unraveling the history of the landscape. In particular, the simulation of the competition between tectonic uplift and fluvial erosion expressed by the stream power law has recently raised interest in computer graphics, as it allows the generation and control of consistent large-scale mountain ranges, albeit at the cost of a lengthy simulation. In this paper, we explore the analytical solutions of the stream power law and propose a method that is both physically-based and procedural, allowing fast and consistent large-scale terrain generation. In our approach, time is no longer the stopping criterion of an iterative process but acts as the parameter of a mathematical function: a slider that controls the aging of the input terrain, from subtle erosion to complete replacement by a fully formed mountain range. While analytical solutions have been proposed by the geomorphology community for the 1D case, extending them to a 2D heightmap proves challenging. We propose an efficient implementation of the analytical solutions with a multigrid-accelerated iterative process, together with solutions to incorporate landslides and hillslope processes – two erosion factors that complement the stream power law.
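
For context, the stream power law referenced above models the elevation change of a terrain cell as the balance between tectonic uplift and fluvial erosion, dz/dt = U - K A^m S^n (A: drainage area, S: slope, U: uplift rate; K, m, n: constants). The sketch below shows the conventional explicit iterative update that analytical approaches such as the one above aim to replace with a direct evaluation in time; the constants, grid, and drainage-area proxy are illustrative assumptions, not the paper's setup.

    import numpy as np

    def explicit_stream_power_step(z, uplift, drainage_area, dx, dt, K=2e-5, m=0.5, n=1.0):
        # One explicit Euler step of dz/dt = U - K * A^m * S^n on a 1D profile.
        slope = np.abs(np.gradient(z, dx))            # local slope magnitude S
        erosion = K * drainage_area**m * slope**n     # fluvial erosion rate
        return z + dt * (uplift - erosion)

    # Toy usage: erode a 1D ramp for a few steps (units and values are arbitrary).
    x = np.linspace(0.0, 1e4, 256)
    z = 0.05 * x                                      # initial ramp
    area = np.linspace(1e6, 1e3, 256)                 # crude per-cell drainage-area proxy
    for _ in range(100):
        z = explicit_stream_power_step(z, uplift=1e-3, drainage_area=area,
                                       dx=x[1] - x[0], dt=100.0)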

Item: Polygon Laplacian Made Robust (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Bunge, Astrid; Bukenberger, Dennis R.; Wagner, Sven Dominik; Alexa, Marc; Botsch, Mario; Bermano, Amit H.; Kalogerakis, Evangelos
Discrete Laplacians are the basis for various tasks in geometry processing. While the most desirable properties of the discretization invariably lead to the so-called cotangent Laplacian for triangle meshes, applying the same principles to polygon Laplacians leaves degrees of freedom in their construction. From linear finite elements it is well known how the shape of triangles affects both the error and the operator's condition. We observe that shape quality can be encapsulated as the trace of the Laplacian and suggest that trace minimization is a helpful tool to improve numerical behavior. We apply this observation to the polygon Laplacian constructed from a virtual triangulation [BHKB20] to derive optimal parameters per polygon. Moreover, we devise a smoothing approach for the vertices of a polygon mesh that minimizes the trace. We analyze the properties of the optimized discrete operators and show their superiority over generic parameter selection in theory and through various experiments.

Item: Stylized Face Sketch Extraction via Generative Prior with Limited Data (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yun, Kwan; Seo, Kwanggyoon; Seo, Chang Wook; Yoon, Soyeon; Kim, Seongcheol; Ji, Soohyun; Ashtari, Amirsaman; Noh, Junyong; Bermano, Amit H.; Kalogerakis, Evangelos
Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention. While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high-resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with 16 pairs of face images and corresponding sketches. The sketch generator uses part-based losses with two-stage learning for fast convergence during training and high-quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state-of-the-art sketch extraction methods and few-shot image adaptation methods for the task of extracting high-resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing.
The project page can be found at https://kwanyun.github.io/stylesketch_project.

Item: Navigating the Manifold of Translucent Appearance (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lanza, Dario; Masia, Belen; Jarabo, Adrian; Bermano, Amit H.; Kalogerakis, Evangelos
We present a perceptually-motivated manifold for translucent appearance, designed for intuitive editing of translucent materials by navigating through the manifold. Classic tools for editing translucent appearance, based on the use of sliders to tune a number of parameters, are challenging for non-expert users: these parameters have a highly non-linear effect on appearance and exhibit complex interplay and similarity relations between them. Instead, we pose editing as a navigation task in a low-dimensional space of appearances, which abstracts the user from the underlying optical parameters. To achieve this, we build a low-dimensional continuous manifold of translucent appearance that correlates with how humans perceive this type of material. We first analyze the correlation of different distance metrics in image space with human perception. We then select the best-performing metric to build a low-dimensional manifold, which can be used to navigate the space of translucent appearance. To evaluate the validity of the proposed manifold within its intended application scenario, we build an editing interface that leverages the manifold and relies on image navigation plus a fine-tuning step to edit appearance. We compare our intuitive interface to a traditional, slider-based one in a user study, demonstrating its effectiveness and superior performance when editing translucent objects.

Item: 3D Reconstruction and Semantic Modeling of Eyelashes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kerbiriou, Glenn; Avril, Quentin; Marchal, Maud; Bermano, Amit H.; Kalogerakis, Evangelos
High-fidelity digital human modeling has become crucial in various applications, including gaming, visual effects and virtual reality. Despite the significant impact of eyelashes on facial aesthetics, their reconstruction and modeling have been largely unexplored. In this paper, we introduce the first data-driven generative model of eyelashes based on semantic features. This model is derived from real data through a new 3D eyelash reconstruction method based on multi-view images. The reconstructed data is made available, constituting the first published dataset of 3D eyelashes. Through an innovative extraction process, we determine the features of any set of eyelashes and present detailed descriptive statistics of human eyelash shapes. The proposed eyelash model, which relies exclusively on semantic parameters, effectively captures the appearance of a set of eyelashes. Results show that the proposed model enables interactive, intuitive and realistic eyelash modeling for non-experts, enriching avatar creation and synthetic data generation pipelines.

Item: Volcanic Skies: Coupling Explosive Eruptions with Atmospheric Simulation to Create Consistent Skyscapes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Pretorius, Pieter C.; Gain, James; Lastic, Maud; Cordonnier, Guillaume; Chen, Jiong; Rohmer, Damien; Cani, Marie-Paule; Bermano, Amit H.; Kalogerakis, Evangelos
Explosive volcanic eruptions rank among the most terrifying natural phenomena and are thus frequently depicted in films, games, and other media, usually with a bespoke once-off solution.
In this paper, we introduce the first general-purpose model for bi-directional interaction between the atmosphere and a volcano plume. In line with recent interactive volcano models, we approximate the plume dynamics with Lagrangian disks and spheres and the atmosphere with sparse layers of 2D Eulerian grids, enabling us to focus on the transfer of physical quantities such as temperature, ash, moisture, and wind velocity between these sub-models. We subsequently generate volumetric animations by noise-based procedural upsampling keyed to aspects of advection, convection, moisture, and ash content to produce a fully realized volcanic skyscape. Our model captures most of the visually salient features emerging from volcano-sky interaction, such as windswept plumes, enmeshed cap, bell and skirt clouds, shockwave effects, ash rain, and sheaths of lightning visible in the dark.

Item: Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Rasoulzadeh, Shervin; Wimmer, Michael; Stauss, Philipp; Kovacic, Iva; Bermano, Amit H.; Kalogerakis, Evangelos
We present Strokes2Surface, an offline geometry reconstruction pipeline that recovers well-connected curve networks from imprecise 4D sketches to bridge the concept design and digital modeling stages in architectural design. The input to our pipeline consists of 3D strokes' polyline vertices and their timestamps as the fourth dimension, along with additional metadata recorded throughout sketching. Inspired by architectural sketching practices, our pipeline combines a classifier and two clustering models to achieve its goal. First, using a set of hand-engineered features extracted from the sketch, the classifier distinguishes between strokes depicting boundaries (Shape strokes) and strokes depicting enclosed areas (Scribble strokes). Next, the two clustering models parse strokes of each type into distinct groups, each representing an individual edge or face of the intended architectural object. Curve networks are then formed through topology recovery of the consolidated Shape clusters and surfaced using the Scribble clusters to guide cycle discovery. Our evaluation is threefold: we confirm the usability of the Strokes2Surface pipeline in architectural design use cases via a user study, we validate our choice of features via statistical analysis and ablation studies on our collected dataset, and we compare our outputs against a range of reconstructions computed using alternative methods.

Item: Single-Image SVBRDF Estimation with Learned Gradient Descent (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Luo, Xuejiao; Scandolo, Leonardo; Bousseau, Adrien; Eisemann, Elmar; Bermano, Amit H.; Kalogerakis, Evangelos
Recovering spatially-varying materials from a single photograph of a surface is inherently ill-posed, making the direct application of gradient descent on the reflectance parameters prone to poor minima. Recent methods leverage deep learning, either by directly regressing reflectance parameters using feed-forward neural networks or by learning a latent space of SVBRDFs using encoder-decoder or generative adversarial networks followed by gradient-based optimization in latent space. The former is fast but does not account for the likelihood of the prediction, i.e., how well the resulting reflectance explains the input image. The latter provides a strong prior on the space of spatially-varying materials, but this prior can hinder the reconstruction of images that are too different from the training data. Our method combines the strengths of both approaches. We optimize reflectance parameters to best reconstruct the input image using a recurrent neural network, which iteratively predicts how to update the reflectance parameters given the gradient of the reconstruction likelihood. By combining a learned prior with a likelihood measure, our approach provides a maximum a posteriori estimate of the SVBRDF. Our evaluation shows that this learned gradient-descent method achieves state-of-the-art performance for SVBRDF estimation on synthetic and real images.
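
The update rule described in the abstract above follows the general learned-gradient-descent pattern, in which a recurrent network maps the gradient of the reconstruction loss to a parameter update. Below is a minimal, self-contained sketch of that pattern with a GRU cell; the toy differentiable "renderer" (a fixed linear map), network sizes, and step count are illustrative assumptions and not the paper's architecture, and training of the update network itself is omitted.

    import torch
    import torch.nn as nn

    class LearnedUpdater(nn.Module):
        # Recurrent update network: maps the current gradient to a parameter update.
        def __init__(self, n_params, hidden=64):
            super().__init__()
            self.cell = nn.GRUCell(n_params, hidden)
            self.head = nn.Linear(hidden, n_params)

        def forward(self, grad, h):
            h = self.cell(grad, h)
            return self.head(h), h

    def learned_gradient_descent(render, target, theta0, updater, steps=10):
        # Iteratively refine parameters theta with network-predicted updates.
        theta = theta0.clone().requires_grad_(True)
        h = torch.zeros(1, updater.cell.hidden_size)
        for _ in range(steps):
            loss = ((render(theta) - target) ** 2).mean()   # reconstruction (likelihood) term
            (grad,) = torch.autograd.grad(loss, theta)
            delta, h = updater(grad.reshape(1, -1), h)
            theta = (theta + delta.reshape(theta.shape)).detach().requires_grad_(True)
        return theta.detach()

    # Toy usage: here the updater is untrained; in practice it would be trained
    # over many example reconstructions so that its updates encode a prior.
    n = 8
    A = torch.randn(16, n)
    render = lambda p: A @ p
    target = render(torch.rand(n))
    theta = learned_gradient_descent(render, target, torch.zeros(n), LearnedUpdater(n))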

Item: An Empirically Derived Adjustable Model for Particle Size Distributions in Advection Fog (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kolárová, Monika; Lachiver, Loïc; Wilkie, Alexander; Bermano, Amit H.; Kalogerakis, Evangelos
Realistically modelled atmospheric phenomena are a long-standing research topic in rendering. While significant progress has been made in modelling clear skies and clouds, fog has often been simplified as a medium that is homogeneous throughout, or as a simple density gradient. However, these approximations neglect the characteristic variations real advection fog shows throughout its vertical span, and they do not provide the particle distribution data needed for accurate rendering. Based on data from the meteorological literature, we developed an analytical model that yields the distribution of particle size as a function of altitude within an advection fog layer. The thickness of the fog layer is an additional input parameter, so that fog layers of varying thickness can be realistically represented. We also demonstrate that, based on Mie scattering, this model can easily be integrated into a Monte Carlo renderer. Our model is the first non-trivial volumetric model for advection fog that is based on real measurement data and that contains all the components needed for inclusion in a modern renderer. The model is provided as an open-source component and can serve as a reference for rendering problems that involve fog layers.

Item: Wavelet Potentials: An Efficient Potential Recovery Technique for Pointwise Incompressible Fluids (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lyu, Luan; Ren, Xiaohua; Cao, Wei; Zhu, Jian; Wu, Enhua; Yang, Zhi-Xin; Bermano, Amit H.; Kalogerakis, Evangelos
We introduce an efficient technique for recovering the vector potential in wavelet space to simulate pointwise incompressible fluids. This technique ensures that fluid velocities remain divergence-free at any point within the fluid domain and preserves local volume during the simulation. Divergence-free wavelets are used to calculate the wavelet coefficients of the vector potential, resulting in a smooth vector potential with enhanced accuracy, even when the input velocities exhibit some degree of divergence. This enhanced accuracy eliminates the need for additional computational time to reach a given accuracy threshold, as fewer iterations are required by the pressure Poisson solver. Additionally, in 3D, since the wavelet transform is computed in place, only the memory for storing the vector potential is required. These two features make the method remarkably efficient for recovering the vector potential in fluid simulation. Furthermore, the method can handle various boundary conditions during the wavelet transform, making it suitable for simulating fluids with Neumann and Dirichlet boundary conditions. Our approach is highly parallelizable and has O(n) time complexity, allowing for seamless deployment on GPUs and yielding remarkable computational efficiency. Experiments demonstrate that, taking into account the time consumed by the pressure Poisson solver, the method achieves an approximately 2x speedup on GPUs compared to state-of-the-art vector potential recovery techniques while maintaining a precision level of 10^-6 in single-precision floating point. The source code of 'Wavelet Potentials' can be found at https://github.com/yours321dog/WaveletPotentials.
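
The guarantee that a velocity field recovered from a vector potential is divergence-free rests on the identity div(curl psi) = 0. The snippet below illustrates that property on a periodic 3D grid with simple central differences: even a random, noisy potential yields a velocity whose discrete divergence is near machine precision. It is a generic illustration of why potential-based recovery enforces incompressibility under these stated assumptions, not the paper's wavelet-space construction.

    import numpy as np

    def d(f, axis, h):
        # Periodic central difference along one axis.
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

    def curl(psi, h):
        # Velocity u = curl(psi) from a vector potential psi = (px, py, pz).
        px, py, pz = psi
        u = d(pz, 1, h) - d(py, 2, h)
        v = d(px, 2, h) - d(pz, 0, h)
        w = d(py, 0, h) - d(px, 1, h)
        return u, v, w

    def divergence(u, v, w, h):
        return d(u, 0, h) + d(v, 1, h) + d(w, 2, h)

    # A random potential on a periodic 32^3 grid: its curl is divergence-free
    # up to floating-point roundoff, however noisy the potential is.
    rng = np.random.default_rng(0)
    h = 1.0 / 32
    psi = rng.standard_normal((3, 32, 32, 32))
    u, v, w = curl(psi, h)
    print(np.abs(divergence(u, v, w, h)).max())  # near machine precision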

Item: GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Gruber, Aurel; Collins, Edo; Meka, Abhimitra; Mueller, Franziska; Sarkar, Kripasindhu; Orts-Escolano, Sergio; Prasso, Luca; Busch, Jay; Gross, Markus; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
High-resolution texture maps are essential to render photoreal digital humans for visual effects or to generate data for machine learning. The acquisition of high-resolution assets at scale is cumbersome: it involves enrolling a large number of human subjects, using expensive multi-view camera setups, and significant manual artistic effort to align the textures. To alleviate these problems, we introduce GANtlitz (a play on the German noun Antlitz, meaning face), a generative model that can synthesize multi-modal ultra-high-resolution face appearance maps for novel identities. Our method solves three distinct challenges: 1) the unavailability of the very large data corpus generally required for training generative models, 2) the memory and computational limitations of training a GAN at ultra-high resolutions, and 3) consistency of appearance features such as skin color, pores and wrinkles in high-resolution textures across different modalities. We introduce dual-style blocks, an extension of the style blocks of the StyleGAN2 architecture, which improve multi-modal synthesis. Our patch-based architecture is trained only on image patches obtained from a small set of face textures (<100), and yet it allows us to generate seamless appearance maps of novel identities at 6k×4k resolution. Extensive qualitative and quantitative evaluations and baseline comparisons show the efficacy of our proposed system.

Item: Monte Carlo Vortical Smoothed Particle Hydrodynamics for Simulating Turbulent Flows (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Ye, Xingyu; Wang, Xiaokun; Xu, Yanrui; Kosinka, Jiri; Telea, Alexandru C.; You, Lihua; Zhang, Jian Jun; Chang, Jian; Bermano, Amit H.; Kalogerakis, Evangelos
For vortex particle methods relying on SPH-based simulations, the direct approach of iterating over all fluid particles to recover velocity from vorticity can lead to a significant computational overhead during the Biot-Savart summation. To address this challenge, we present a Monte Carlo vortical smoothed particle hydrodynamics (MCVSPH) method for efficiently simulating turbulent flows within an SPH framework. Our approach harnesses a Monte Carlo estimator and operates exclusively on a pre-sampled particle subset, thus eliminating the need for costly global iterations over all fluid particles. Our algorithm is decoupled from the various projection loops that enforce incompressibility, handles the recovery of turbulent details independently, and seamlessly integrates with state-of-the-art SPH-based incompressibility solvers. Our approach rectifies the velocity of all fluid particles based on the vorticity loss so as to respect the evolution of vorticity, effectively enforcing vortex motions. We demonstrate, through several experiments, that our MCVSPH method effectively preserves vorticity and creates visually prominent vortical motions.
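
The O(N^2) Biot-Savart summation mentioned above recovers the velocity at a point by summing the induced velocity of every vortex particle; a Monte Carlo estimator instead averages over a random subset and rescales by the subset fraction. The sketch below shows that generic subsampled estimator with a mollified (softened) kernel; the particle layout, kernel softening, and sample count are illustrative assumptions, and this is not the paper's exact formulation.

    import numpy as np

    def biot_savart_mc(x, positions, vorticities, n_samples=256, eps=1e-2, rng=None):
        # Monte Carlo estimate of u(x) = -1/(4*pi) * sum_j (x - x_j) x omega_j / |x - x_j|^3,
        # using a uniformly sampled particle subset rescaled by N / n_samples.
        rng = np.random.default_rng() if rng is None else rng
        N = positions.shape[0]
        idx = rng.choice(N, size=min(n_samples, N), replace=False)
        r = x - positions[idx]                                  # (S, 3) offsets
        dist3 = (np.sum(r * r, axis=1) + eps**2) ** 1.5         # softened |r|^3
        contrib = np.cross(r, vorticities[idx]) / dist3[:, None]
        return -(N / len(idx)) * contrib.sum(axis=0) / (4.0 * np.pi)

    # Toy usage: velocity induced at the origin by 10,000 random vortex particles.
    rng = np.random.default_rng(1)
    pos = rng.uniform(-1, 1, size=(10_000, 3))
    omg = rng.standard_normal((10_000, 3)) * 0.01
    u = biot_savart_mc(np.zeros(3), pos, omg, n_samples=512, rng=rng)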

Item: Cinematographic Camera Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos
Designing effective camera trajectories in virtual 3D environments is a challenging task, even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters, and dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, ...), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model that uses a transformer-based architecture to handle temporality and exploits the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions. We further integrate keyframing constraints and the ability to blend naturally between motions using latent interpolation, augmenting the designers' degree of control. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.

Item: Neural Semantic Surface Maps (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Morreale, Luca; Aigerman, Noam; Kim, Vladimir G.; Mitra, Niloy J.; Bermano, Amit H.; Kalogerakis, Evangelos
We present an automated technique for computing a map between two genus-zero shapes that matches semantically corresponding regions to one another. The lack of annotated data prohibits direct inference of 3D semantic priors; instead, current state-of-the-art methods predominantly optimize geometric properties or require varying amounts of manual annotation. To overcome the lack of annotated training data, we distill semantic matches from pre-trained vision models: our method renders the pair of untextured 3D shapes from multiple viewpoints; the resulting renders are then fed into an off-the-shelf image-matching strategy that leverages a pre-trained visual model to produce feature points. This yields semantic correspondences, which are projected back onto the 3D shapes, producing a raw matching that is inaccurate and inconsistent across different viewpoints. These correspondences are refined and distilled into an inter-surface map by a dedicated optimization scheme, which promotes bijectivity and continuity of the output map. We illustrate that our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotation or any 3D training data.
Furthermore, it proves effective in scenarios with high semantic complexity, where objects are non-isometrically related, as well as in situations where they are nearly isometric.

Item: Surface-aware Mesh Texture Synthesis with Pre-trained 2D CNNs (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kovács, Áron Samuel; Hermosilla, Pedro; Raidou, Renata Georgia; Bermano, Amit H.; Kalogerakis, Evangelos
Mesh texture synthesis is a key component in the automatic generation of 3D content. Existing learning-based methods have drawbacks: they either disregard the shape manifold during texture generation or require a large number of different views to mitigate occlusion-related inconsistencies. In this paper, we present a novel surface-aware approach for mesh texture synthesis that overcomes these drawbacks by leveraging the pre-trained weights of 2D convolutional neural networks (CNNs) with the same architecture, but with convolutions designed for 3D meshes. Our proposed network keeps track of the oriented patches surrounding each texel, enabling seamless texture synthesis and retaining local similarity to classical 2D convolutions with square kernels. Our approach allows us to synthesize textures that account for the geometric content of mesh surfaces, eliminating discontinuities and achieving quality comparable to 2D image synthesis algorithms. We compare our approach with state-of-the-art methods and, through qualitative and quantitative evaluations, demonstrate that it is more effective for a variety of meshes and styles, while also producing visually appealing and consistent textures on meshes.

Item: Stylize My Wrinkles: Bridging the Gap from Simulation to Reality (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Weiss, Sebastian; Stanhope, Jackson; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Bermano, Amit H.; Kalogerakis, Evangelos
Modeling realistic human skin with pores and wrinkles down to milli- and micrometer resolution is a challenging task. Prior work has shown that such micro geometry can be efficiently generated through simulation methods or, in specialized cases, via 3D scanning of real skin. Simulation methods allow the wrinkles on the face to be highly customized but can lead to a synthetic look, while scanning methods yield a more organic look for the micro details but are only applicable to small skin patches due to the required image resolution. In this work, we aim to bridge the gap between synthetic simulation and real skin scanning by proposing a method that can be applied to large skin regions (e.g., an entire face) with the controllability of simulation and the organic look of real micro details. Our method is based on style transfer at its core, where we use scanned displacement maps of real skin patches as style images and displacement maps from an artist-friendly simulation method as content images. We build a library of displacement maps as style images by employing a simplified scanning setup that can capture high-resolution patches of real skin. To create the content component for the style transfer and to facilitate parameter tuning for the simulation, we design a library of preset parameter values depicting different skin types, and we present a new method to fit the simulation parameters to scanned skin patches. This allows fully automatic parameter generation, interpolation and stylization across entire faces.
We evaluate our method by generating realistic skin micro details for various subjects of different ages and genders, and demonstrate that our approach achieves a more organic and natural look than simulation alone.