EG 2020 - STARs (CGF 39-2)

State of the Art Reports
A Survey of Temporal Antialiasing Techniques
Lei Yang, Shiqiu Liu, and Marco Salvi
A Survey of Multifragment Rendering
Andreas Alexandros Vasilakis, Konstantinos Vardis, and Georgios Papaioannou
Learning Generative Models of 3D Structures
Siddhartha Chaudhuri, Daniel Ritchie, Jiajun Wu, Kai Xu, and Hao Zhang
State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments
Giovanni Pintore, Claudio Mura, Fabio Ganovelli, Lizeth Joseline Fuentes-Perez, Renato Pajarola, and Enrico Gobbetti
State of the Art on Neural Rendering
Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, and Michael Zollhöfer
Survey of Models for Acquiring the Optical Properties of Translucent Materials
Jeppe Revall Frisvad, Søren Alkærsig Jensen, Jonas Skovlund Madsen, António Correia, Li Yang, Søren K. S. Gregersen, Youri Meuret, and Poul-Erik Hansen
A Survey on Sketch Based Content Creation: from the Desktop to Virtual and Augmented Reality
Sukanya Bhattacharjee and Parag Chaudhuri

Recent Submissions

  • Item
    EUROGRAPHICS 2020: CGF 39-2 STARs Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Mantiuk, Rafal; Sundstedt, Veronica; Mantiuk, Rafal and Sundstedt, Veronica
    -
  • Item
    A Survey of Temporal Antialiasing Techniques
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Yang, Lei; Liu, Shiqiu; Salvi, Marco; Mantiuk, Rafal and Sundstedt, Veronica
    Temporal Antialiasing (TAA), formally defined as temporally-amortized supersampling, is the most widely used antialiasing technique in today's real-time renderers and game engines. This survey provides a systematic overview of this technique. We first review the history of TAA, its development path and related work. We then identify the two main sub-components of TAA, sample accumulation and history validation, and discuss algorithmic and implementation options. As temporal upsampling is becoming increasingly relevant to today's game engines, we propose an extension of our TAA formulation to cover a variety of temporal upsampling techniques. Despite the popularity of TAA, there are still significant unresolved technical challenges that affect image quality in many scenarios. We provide an in-depth analysis of these challenges, and review existing techniques for improvements. Finally, we summarize popular algorithms and topics that are closely related to TAA. We believe the rapid advances in those areas may either benefit from or feed back into TAA research and development.
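    The two sub-components named in this abstract, sample accumulation and history validation, can be illustrated with a minimal CPU sketch (not from the survey itself). It uses exponential accumulation and, as one common validation strategy, clamps the history sample to the current frame's 3x3 neighborhood min/max; the blend weight `alpha` and the clamp-based validation are illustrative choices, and real implementations also reproject the history with motion vectors.

    ```python
    import numpy as np

    def taa_resolve(current, history, alpha=0.1):
        """One TAA step on an (H, W, 3) float image pair.

        Sample accumulation: exponential blend of current frame and history.
        History validation: clamp the history sample to the min/max of the
        current frame's 3x3 neighborhood, which rejects stale history colors
        (e.g. after a disocclusion) before they are blended in.
        """
        h, w = current.shape[:2]
        out = np.empty_like(current)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(y - 1, 0), min(y + 2, h)
                x0, x1 = max(x - 1, 0), min(x + 2, w)
                nb = current[y0:y1, x0:x1]
                lo = nb.min(axis=(0, 1))
                hi = nb.max(axis=(0, 1))
                hist = np.clip(history[y, x], lo, hi)  # history validation
                out[y, x] = alpha * current[y, x] + (1.0 - alpha) * hist
        return out
    ```

    With a history that disagrees entirely with the current frame, the clamp discards it: blending an all-white current frame with an all-black history still yields an all-white result, because the black history sample is clamped into the white neighborhood range before accumulation.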
  • Item
    Learning Generative Models of 3D Structures
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Chaudhuri, Siddhartha; Ritchie, Daniel; Wu, Jiajun; Xu, Kai; Zhang, Hao; Mantiuk, Rafal and Sundstedt, Veronica
    3D models of objects and scenes are critical to many academic disciplines and industrial applications. Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real-world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure-aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high-level structure. This state-of-the-art report surveys historical work and recent progress on learning structure-aware generative models of 3D shapes and scenes. We present fundamental representations of 3D shape and scene geometry and structures, describe prominent methodologies including probabilistic models, deep generative models, program synthesis, and neural networks for structured data, and cover many recent methods for structure-aware synthesis of 3D shapes and indoor scenes.
  • Item
    A Survey of Multifragment Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Vasilakis, Andreas Alexandros; Vardis, Konstantinos; Papaioannou, Georgios; Mantiuk, Rafal and Sundstedt, Veronica
    In the past few years, advances in graphics hardware have fuelled an explosion of research and development in the field of interactive and real-time rendering in screen space. Following this trend, a rapidly increasing number of applications rely on multifragment rendering solutions to develop visually convincing graphics applications with dynamic content. The main advantage of these approaches is that they encompass additional rasterised geometry by retaining more information from the fragment sampling domain, thus augmenting the visibility determination stage. With this survey, we provide an overview of and insight into the extensive and active research and respective literature on multifragment rendering. We formally present the multifragment rendering pipeline, clearly identifying the construction strategies, the core image operation categories and their mapping to the respective applications. We describe features and trade-offs for each class of techniques, point out GPU optimisations and limitations, and provide practical recommendations for choosing an appropriate method for each application. Finally, we offer fruitful context for discussion by outlining some existing problems and challenges as well as by presenting opportunities for impactful future research directions.
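    The core idea the abstract describes, retaining more than one fragment per pixel and resolving visibility over the whole list, can be sketched in a few lines (an illustrative CPU example, not code from the survey). Here a pixel's fragments arrive in arbitrary rasterisation order and are sorted by depth before front-to-back alpha compositing; the tuple layout `(depth, rgb, alpha)` is an assumption for this sketch.

    ```python
    def composite_fragments(fragments):
        """Resolve one pixel from its full fragment list (A-buffer style).

        fragments: iterable of (depth, (r, g, b), alpha) in any order.
        Returns the composited RGB color and the remaining transmittance
        (1.0 = nothing covered the pixel, 0.0 = fully opaque result).
        """
        color = [0.0, 0.0, 0.0]
        transmittance = 1.0
        # Sort front to back; a single-fragment z-buffer would have kept
        # only the nearest entry and lost all the others.
        for depth, rgb, a in sorted(fragments, key=lambda f: f[0]):
            for c in range(3):
                color[c] += transmittance * a * rgb[c]
            transmittance *= 1.0 - a
        return color, transmittance
    ```

    For example, a half-transparent red fragment at depth 1.0 in front of an opaque blue fragment at depth 2.0 composites to equal parts red and blue, which a conventional single-fragment depth test cannot produce.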
  • Item
    State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Pintore, Giovanni; Mura, Claudio; Ganovelli, Fabio; Fuentes-Perez, Lizeth Joseline; Pajarola, Renato; Gobbetti, Enrico; Mantiuk, Rafal and Sundstedt, Veronica
    Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this survey, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.
  • Item
    Survey of Models for Acquiring the Optical Properties of Translucent Materials
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Frisvad, Jeppe Revall; Jensen, Søren Alkærsig; Madsen, Jonas Skovlund; Correia, António; Yang, Li; Gregersen, Søren K. S.; Meuret, Youri; Hansen, Poul-Erik; Mantiuk, Rafal and Sundstedt, Veronica
    The outset of realistic rendering is a desire to reproduce the appearance of the real world. Rendering techniques therefore operate at a scale corresponding to the size of objects that we observe with our naked eyes. At the same time, rendering techniques must be able to deal with objects of nearly arbitrary shapes and materials. These requirements lead to techniques that oftentimes leave the task of setting the optical properties of the materials to the user. Matching the appearance of real objects by manual adjustment of optical properties is however nearly impossible. We can render objects with a plausible appearance in this way but cannot compare the appearance of a manufactured item to that of its digital twin. This is especially true in the case of translucent objects, where we need more than a goniometric measurement of the optical properties. In this survey, we provide an overview of forward and inverse models for acquiring the optical properties of translucent materials. We map out the efforts in graphics research in this area and describe techniques available in related fields. Our objective is to provide a better understanding of the tools currently available for appearance specification when it comes to digital representations of real translucent objects.
  • Item
    State of the Art on Neural Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Tewari, Ayush; Fried, Ohad; Thies, Justus; Sitzmann, Vincent; Lombardi, Stephen; Sunkavalli, Kalyan; Martin-Brualla, Ricardo; Simon, Tomas; Saragih, Jason; Nießner, Matthias; Pandey, Rohit; Fanello, Sean; Wetzstein, Gordon; Zhu, Jun-Yan; Theobalt, Christian; Agrawala, Maneesh; Shechtman, Eli; Goldman, Dan B.; Zollhöfer, Michael; Mantiuk, Rafal and Sundstedt, Veronica
    Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state-of-the-art report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
  • Item
    A Survey on Sketch Based Content Creation: from the Desktop to Virtual and Augmented Reality
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Bhattacharjee, Sukanya; Chaudhuri, Parag; Mantiuk, Rafal and Sundstedt, Veronica
    Sketching is one of the most natural ways of representing any object pictorially. It is, however, challenging to convert sketches to 3D content that is suitable for various applications like movies, games and computer aided design. With the advent of more accessible Virtual Reality (VR) and Augmented Reality (AR) technologies, sketching can potentially become a more powerful yet easy-to-use modality for content creation. In this state-of-the-art report, we aim to present a comprehensive overview of techniques related to sketch-based content creation, both on the desktop and in VR/AR. We discuss various basic concepts related to static and dynamic content creation using sketches. We provide a structured review of various aspects of content creation including model generation, coloring and texturing, and finally animation. We motivate the advantages that VR/AR-based sketching techniques and systems can offer in making sketch-based content creation a more accessible and powerful mode of expression. We also discuss and highlight various unsolved challenges that current sketch-based techniques face, with the goal of encouraging future research in the domain.