Browsing by Author "Papas, Marios"
Now showing 1 - 7 of 7
Item: Cover Image: Mixing Bowl (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017)
Marra, Alessia; Nitti, Maurizio; Papas, Marios; Müller, Thomas; Gross, Markus; Jarosz, Wojciech; Novák, Jan; Chen, Min and Zhang, Hao (Richard)

Item: Automatic Feature Selection for Denoising Volumetric Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Zhang, Xianyao; Ott, Melvin; Manzi, Marco; Gross, Markus; Papas, Marios; Ghosh, Abhijeet; Wei, Li-Yi
We propose a method for constructing feature sets that significantly improve the quality of neural denoisers for Monte Carlo renderings with volumetric content. Starting from a large set of hand-crafted features, we propose a feature selection process that identifies significantly pruned, near-optimal subsets. While a naive approach would require training and testing a separate denoiser for every possible feature combination, our selection process requires training only a single probe denoiser. Moreover, our approximate solution has an asymptotic complexity that is quadratic in the number of features, compared to the exponential complexity of the naive approach, while still producing near-optimal solutions. We demonstrate the usefulness of our approach on various state-of-the-art denoising methods for volumetric content, and we observe improvements in denoising quality when using our automatically selected feature sets over the hand-crafted sets proposed by the original methods.
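The abstract above does not spell out the exact selection procedure, so the following is only a rough sketch, under assumptions, of a probe-based greedy forward selection with the same quadratic scoring cost: a single pre-trained probe denoiser is re-evaluated on candidate feature subsets instead of training one denoiser per subset. The names `score_subset` and `budget`, the greedy strategy, and the example feature names are all hypothetical.

```python
# Hypothetical sketch: greedy forward selection of denoiser input features.
# `score_subset` is assumed to evaluate a single pre-trained "probe" denoiser
# restricted to a candidate feature subset (e.g. by masking unused inputs)
# and to return a quality score such as negative validation loss.
def greedy_feature_selection(features, score_subset, budget):
    selected = []
    remaining = list(features)
    best_score = float("-inf")
    while remaining and len(selected) < budget:
        # One probe evaluation per remaining candidate in every round, so the
        # total number of evaluations grows quadratically with len(features).
        scores = {f: score_subset(selected + [f]) for f in remaining}
        best = max(scores, key=scores.get)
        if scores[best] <= best_score:
            break  # no remaining feature improves the probe's quality
        best_score = scores[best]
        selected.append(best)
        remaining.remove(best)
    return selected

# Example call with made-up volumetric feature names:
# subset = greedy_feature_selection(
#     ["albedo", "normal", "depth", "volume_density", "transmittance"],
#     score_subset=my_probe_score, budget=4)
```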
Item: A computational appearance fabrication framework and derived applications (ETH Zurich, 2015)
Papas, Marios
Traditionally, control over the appearance of objects in the real world was performed manually. Understanding how some physical property of an object would affect its appearance was achieved primarily through trial and error. This procedure could be lengthy and cumbersome, depending on the complexity of the effect of physical properties on appearance and the duration of each fabrication cycle. Precise control of how light interacts with materials has many applications in arts, architecture, industrial design, and engineering. With the recent achievements in geometry retrieval and computational fabrication, we are now able to precisely control and replicate the geometry of real-world objects. Computational appearance fabrication, on the other hand, is still in its infancy. In this thesis we lay the foundation for a general computational appearance fabrication framework, and we demonstrate a range of applications that benefit from it. We present various instances of our framework and detail the design of the corresponding components, such as forward and backward appearance models, measurement, and fabrication. These framework instances help in understanding and controlling the appearance of three general classes of materials: homogeneous participating media (such as wax and milk), specular surfaces (such as lenses), and granular media (such as sugar and snow). More specifically, we show how to precisely measure, control, and fabricate the real-world appearance of homogeneous translucent materials, how to computationally design and fabricate steganographic lenses, and finally we present a fast appearance model for accurately simulating the appearance of granular media.

Item: Deep Compositional Denoising for High-quality Monte Carlo Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Zhang, Xianyao; Manzi, Marco; Vogels, Thijs; Dahlberg, Henrik; Gross, Markus; Papas, Marios; Bousseau, Adrien and McGuire, Morgan
We propose a deep-learning method for automatically decomposing noisy Monte Carlo renderings into components that kernel-predicting denoisers can denoise more effectively. In our model, a neural decomposition module learns to predict noisy components and corresponding feature maps, which are subsequently reconstructed by a denoising module. The components are predicted based on statistics aggregated at the pixel level by the renderer. Denoising these components individually allows the use of per-component kernels that adapt to each component's noisy signal characteristics. Experimentally, we show that the proposed decomposition module consistently improves the denoising quality of current state-of-the-art kernel-predicting denoisers on large-scale academic and production datasets.
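As a loose illustration of the compositional idea described above (and not the authors' actual architecture), the toy PyTorch module below predicts a few additive components of the noisy image plus per-pixel filtering kernels, denoises each component with its own kernels, and sums the results. The layer sizes, module names, and single-convolution "networks" are placeholders.

```python
# Toy sketch of compositional, kernel-predicting denoising (NOT the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCompositionalDenoiser(nn.Module):
    def __init__(self, n_components=3, ksize=5):
        super().__init__()
        self.n, self.k = n_components, ksize
        # "Decomposition module": predicts n additive RGB components of the input.
        self.decompose = nn.Conv2d(3, 3 * n_components, 3, padding=1)
        # "Denoising module": predicts one k*k filter per pixel and per component.
        self.kernels = nn.Conv2d(3, n_components * ksize * ksize, 3, padding=1)

    def forward(self, noisy):  # noisy: (B, 3, H, W) radiance
        b, _, h, w = noisy.shape
        comps = self.decompose(noisy).view(b, self.n, 3, h, w)
        kern = self.kernels(noisy).view(b, self.n, self.k * self.k, h, w)
        kern = kern.softmax(dim=2)  # normalize each per-pixel kernel
        out = noisy.new_zeros(b, 3, h, w)
        for i in range(self.n):
            # Gather k*k neighborhoods of component i and filter with its kernels.
            patches = F.unfold(comps[:, i], self.k, padding=self.k // 2)
            patches = patches.view(b, 3, self.k * self.k, h, w)
            out = out + (patches * kern[:, i : i + 1]).sum(dim=2)
        return out  # sum of the individually denoised components

# Usage: den = ToyCompositionalDenoiser(); clean = den(torch.rand(1, 3, 64, 64))
```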
Item: Deep Compositional Denoising on Frame Sequences (The Eurographics Association, 2023)
Zhang, Xianyao; Röthlin, Gerhard; Manzi, Marco; Gross, Markus; Papas, Marios; Ritschel, Tobias; Weidlich, Andrea
Path tracing is the prevalent rendering algorithm in the animated-movie and visual-effects industry, thanks to its simplicity and its ability to render physically plausible lighting effects. However, millions of light paths must be simulated before one final image is produced, and the remaining error manifests as noise. In fact, it can take tens or even hundreds of CPU hours on a modern computer to render a plausible frame of a recent animated movie. Movie production and the VFX industry therefore rely on image-based denoising algorithms to ameliorate the rendering cost; these algorithms suppress rendering noise by reusing information in the neighborhood of each pixel, both spatially and temporally.

Item: NeRF-Tex: Neural Reflectance Field Textures (The Eurographics Association, 2021)
Baatz, Hendrik; Granskog, Jonathan; Papas, Marios; Rousselle, Fabrice; Novák, Jan; Bousseau, Adrien and McGuire, Morgan
We investigate the use of neural fields for modeling diverse mesoscale structures, such as fur, fabric, and grass. Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural reflectance field (NeRF-Tex), which jointly models the geometry of the material and its response to lighting. The NeRF-Tex primitive can be instantiated over a base mesh to "texture" it with the desired meso- and microscale appearance. We condition the reflectance field on user-defined parameters that control the appearance. A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modeled and provides a solution for combating repetitive texturing artifacts. We also demonstrate that NeRF textures naturally facilitate continuous level-of-detail rendering. Our approach unites the versatility and modeling power of neural networks with the artistic control needed for precise modeling of virtual scenes. While all our training data is currently synthetic, our work provides a recipe that can be further extended to extract complex, hard-to-model appearances from real images.

Item: Path Guiding Using Spatio-Directional Mixture Models (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022)
Dodik, Ana; Papas, Marios; Öztireli, Cengiz; Müller, Thomas; Hauser, Helwig and Alliez, Pierre
We propose a learning-based method for light-path construction in path-tracing algorithms, which iteratively optimizes and samples from what we refer to as spatio-directional Gaussian mixture models (SDMMs). In particular, we approximate incident radiance as an online-trained 5D mixture that is accelerated by a kD-tree. Using the same framework, we approximate BSDFs as pre-trained nD mixtures, where n is the number of BSDF parameters. Such an approach addresses two major challenges in path-guiding models. First, the 5D radiance representation naturally captures correlation between the spatial and directional dimensions. Such correlations are present in, for example, parallax and caustics. Second, by using a tangent-space parameterization of Gaussians, our spatio-directional mixtures can perform approximate product sampling with arbitrarily oriented BSDFs. Existing models can only do this by either foregoing anisotropy of the mixture components or by representing the radiance field in local (normal-aligned) coordinates, both of which make the radiance field more difficult to learn. An additional benefit of the tangent-space parameterization is that each individual Gaussian is mapped to the solid sphere with low distortion near its centre of mass. Our method performs especially well on scenes with small, localized luminaires that induce high spatio-directional correlation in the incident radiance.
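As a rough numerical illustration of why a joint 5D (position, direction) mixture captures spatio-directional correlation, the sketch below conditions such a Gaussian mixture on a query position to obtain a directional mixture for sampling. It uses plain Euclidean directional coordinates rather than the paper's tangent-space parameterization, and the function names are assumptions.

```python
# Hypothetical sketch: conditioning a joint 5D (3D position + 2D direction)
# Gaussian mixture on a query position to obtain a directional mixture for
# sampling. Plain Euclidean math, NOT the paper's tangent-space parameterization.
import numpy as np
from scipy.stats import multivariate_normal

def condition_on_position(weights, means, covs, x, d_pos=3):
    """weights: (K,), means: (K, 5), covs: (K, 5, 5), x: (3,) query position."""
    new_w, new_mu, new_cov = [], [], []
    for w, mu, cov in zip(weights, means, covs):
        mu_p, mu_d = mu[:d_pos], mu[d_pos:]
        S_pp = cov[:d_pos, :d_pos]
        S_dp = cov[d_pos:, :d_pos]
        S_dd = cov[d_pos:, d_pos:]
        gain = S_dp @ np.linalg.inv(S_pp)
        new_mu.append(mu_d + gain @ (x - mu_p))    # conditional mean
        new_cov.append(S_dd - gain @ S_dp.T)       # conditional covariance
        # Reweight by how responsible this component is for the query position.
        new_w.append(w * multivariate_normal.pdf(x, mean=mu_p, cov=S_pp))
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), np.asarray(new_mu), np.asarray(new_cov)

def sample_direction(rng, weights, means, covs, x):
    w, mu, cov = condition_on_position(weights, means, covs, x)
    i = rng.choice(len(w), p=w)                    # pick a mixture component
    return rng.multivariate_normal(mu[i], cov[i])  # 2D directional coordinates

# Usage with a made-up mixture (weights, means, covs defined elsewhere):
# rng = np.random.default_rng(0)
# d = sample_direction(rng, weights, means, covs, x=np.array([0.1, 0.2, 0.3]))
```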