Browsing by Author "Shechtman, Eli"
Now showing 1 - 4 of 4
Item: Enhancing Neural Style Transfer using Patch-Based Synthesis (The Eurographics Association, 2019)
Authors: Texler, Ondřej; Fišer, Jakub; Lukáč, Mike; Lu, Jingwan; Shechtman, Eli; Sýkora, Daniel
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen

We present a new approach to example-based style transfer that combines neural methods with patch-based synthesis to achieve compelling stylization quality even for high-resolution imagery. We take advantage of neural techniques to provide adequate stylization at the global level and use their output as a prior for subsequent patch-based synthesis at the detail level. Thanks to this combination, our method better preserves the high frequencies of the original artistic media, thereby dramatically increasing the fidelity of the resulting stylized imagery. We also show how to stylize extremely large images (e.g., 340 Mpix) without the need to run the synthesis at the pixel level, while still retaining the original high-frequency details.

Item: STALP: Style Transfer with Auxiliary Limited Pairing (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Futschik, David; Kučera, Michal; Lukáč, Mike; Wang, Zhaowen; Shechtman, Eli; Sýkora, Daniel
Editors: Mitra, Niloy; Viola, Ivan

We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time, semantically meaningful style transfer to a set of target images with content similar to that of the source image. A key added value of our approach is that it also takes into account the consistency of the target images during training. Although these have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use a similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to explicitly handle temporal consistency. We demonstrate its practical utility on various applications including video stylization and style transfer to panoramas, faces, and 3D models.
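The coarse-to-fine idea in the first item above (neural stylization as a global prior, patch-based synthesis for detail) is concrete enough to sketch. The following is our minimal illustration, not the authors' implementation: the neural stage is assumed to have already produced a coarse stylized prior, and the detail stage is a brute-force patch lookup; the function `patch_refine` and all parameter choices are ours.

```python
import numpy as np

def patch_refine(prior, style, style_coarse, patch=8, stride=8):
    """Replace each tile of the upscaled coarse prior with the
    full-resolution style patch whose coarse appearance matches best.

    prior        -- HxWx3 float array: neural stylization upscaled to full res
    style        -- HxWx3 float array: full-resolution style exemplar
    style_coarse -- HxWx3 float array: style exemplar degraded the same way
                    as the prior (downscale + upscale), so both live in the
                    same coarse domain
    """
    H, W, _ = prior.shape
    out = np.zeros_like(style)  # borders left unfilled for brevity

    # Pre-extract candidate style patches on a grid (brute force).
    coords = [(y, x)
              for y in range(0, H - patch + 1, stride)
              for x in range(0, W - patch + 1, stride)]
    bank = np.stack([style_coarse[y:y+patch, x:x+patch].ravel()
                     for y, x in coords])            # (N, patch*patch*3)

    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            q = prior[y:y+patch, x:x+patch].ravel()
            best = np.argmin(((bank - q) ** 2).sum(axis=1))
            sy, sx = coords[best]
            # Blit the *full-resolution* style patch: this is what restores
            # the high-frequency detail the neural stage smoothed away.
            out[y:y+patch, x:x+patch] = style[sy:sy+patch, sx:sx+patch]
    return out
```

In the paper, the same principle scales to very large images because the expensive neural pass runs only at low resolution; here the brute-force lookup merely stands in for their far more efficient patch-based synthesis with overlapping patches and blending.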
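STALP's unpaired consistency constraint, keeping the statistics of neural responses of the targets compatible with those of the stylized source, maps naturally onto a Gram-matrix style loss. Below is a hedged PyTorch sketch, not the published training code: the translation network `net`, the choice of VGG-19 layers, and the weight `w_style` are all our assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen VGG-19 feature extractor; ImageNet input normalization is
# omitted for brevity. The paper's exact layer choice may differ.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
LAYERS = (3, 8, 17, 26)  # relu1_2, relu2_2, relu3_4, relu4_4

def features(x):
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
        if i == LAYERS[-1]:
            break
    return feats

def gram(f):
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stalp_loss(net, source, stylized_source, targets, w_style=1e3):
    # Paired term: on the single exemplar pair, supervise directly.
    paired = F.l1_loss(net(source), stylized_source)

    # Unpaired term: targets have no ground truth, so only constrain
    # their neural-response statistics to match the stylized source.
    ref_grams = [gram(f) for f in features(stylized_source)]
    unpaired = 0.0
    for t in targets:
        for f, g_ref in zip(features(net(t)), ref_grams):
            unpaired = unpaired + F.mse_loss(gram(f), g_ref)
    return paired + w_style * unpaired
```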
Item: State of the Art on Neural Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Tewari, Ayush; Fried, Ohad; Thies, Justus; Sitzmann, Vincent; Lombardi, Stephen; Sunkavalli, Kalyan; Martin-Brualla, Ricardo; Simon, Tomas; Saragih, Jason; Nießner, Matthias; Pandey, Rohit; Fanello, Sean; Wetzstein, Gordon; Zhu, Jun-Yan; Theobalt, Christian; Agrawala, Maneesh; Shechtman, Eli; Goldman, Dan B.; Zollhöfer, Michael
Editors: Mantiuk, Rafal; Sundstedt, Veronica

Efficient rendering of photo-realistic virtual worlds is a long-standing goal of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by integrating differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photorealistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of the report covers the many important use cases of the described algorithms, such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and examine open research problems.

Item: StyleBlit: Fast Example-Based Stylization with Local Guidance (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Sýkora, Daniel; Jamriška, Ondřej; Texler, Ondřej; Fišer, Jakub; Lukáč, Mike; Lu, Jingwan; Shechtman, Eli
Editors: Alliez, Pierre; Pellacini, Fabio

We present StyleBlit, an efficient example-based style transfer algorithm that delivers high-quality stylized renderings in real time on a single-core CPU. Our technique is especially suitable for style transfer applications that use local guidance: descriptive guiding channels containing large spatial variations. Local guidance encourages the transfer of content from the source exemplar to the target image in a semantically meaningful way. Typical local guidance includes, e.g., normal values, texture coordinates, or a displacement field. In contrast to previous style transfer techniques, our approach does not involve any computationally expensive optimization. We demonstrate that when local guidance is used, optimization-based techniques converge to solutions that can be well approximated by simple pixel-level operations. Inspired by this observation, we designed an algorithm that produces results visually similar to, if not better than, the state of the art, and is several orders of magnitude faster. Our approach is suitable for scenarios with a low computational budget, such as games and mobile applications.
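The "simple pixel-level operations" mentioned in the StyleBlit abstract can be illustrated by the naive per-pixel baseline the paper builds on: for every target pixel, find the source pixel with the closest guidance value (e.g., surface normal) and copy its style color. The sketch below shows that baseline only, using a SciPy KD-tree for the lookup; it is not the full StyleBlit algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def guided_blit(style, guide_src, guide_tgt):
    """Naive per-pixel guided transfer (the baseline StyleBlit
    accelerates and regularizes).

    style     -- Hs x Ws x 3  style exemplar colors
    guide_src -- Hs x Ws x C  guidance for the exemplar (e.g., normals)
    guide_tgt -- Ht x Wt x C  guidance for the target (same channels)
    """
    ht, wt, c = guide_tgt.shape

    # Index every source pixel by its guidance vector.
    tree = cKDTree(guide_src.reshape(-1, c))

    # For each target pixel, look up the nearest source guidance value
    # and copy that pixel's style color.
    _, idx = tree.query(guide_tgt.reshape(-1, c))
    return style.reshape(-1, 3)[idx].reshape(ht, wt, 3)
```

Independent per-pixel lookups like this produce noisy results; transferring coherent chunks of the exemplar instead, as the paper does, keeps neighbouring target pixels sampling from contiguous exemplar regions and thereby preserves stroke texture while remaining cheap enough for a single-core CPU.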
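Returning to the neural-rendering report above: its defining ingredient, the integration of differentiable rendering into network training, can be made concrete with a toy example of our own (not taken from the report). A soft rasterizer draws a disk; because pixel coverage falls off smoothly, a pixel loss backpropagates to the disk parameters, and gradient descent recovers them from a target image.

```python
import torch

# Toy differentiable "renderer": draws a soft-edged disk on a 32x32 image.
# The sigmoid makes coverage differentiable w.r.t. the disk parameters;
# a hard rasterizer would have zero gradients almost everywhere.
ys, xs = torch.meshgrid(torch.arange(32.0), torch.arange(32.0), indexing="ij")

def render(cx, cy, r, sharpness=1.0):
    dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    return torch.sigmoid(sharpness * (r - dist))  # ~1 inside, ~0 outside

# Ground-truth image from known parameters; then recover them by
# backpropagating a pixel loss through the renderer.
target = render(torch.tensor(20.0), torch.tensor(12.0), torch.tensor(6.0))

cx = torch.tensor(8.0, requires_grad=True)
cy = torch.tensor(8.0, requires_grad=True)
r = torch.tensor(3.0, requires_grad=True)
opt = torch.optim.Adam([cx, cy, r], lr=0.3)

for step in range(400):
    opt.zero_grad()
    loss = ((render(cx, cy, r) - target) ** 2).mean()
    loss.backward()
    opt.step()

print(f"recovered cx={cx.item():.1f} cy={cy.item():.1f} r={r.item():.1f}")
```

Neural rendering methods apply the same principle at scale: scene parameters (or the networks that predict them) receive gradients from image-space losses because the rendering step itself is differentiable.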