Browsing by Author "Sýkora, Daniel"
Now showing 1 - 7 of 7
Item: Enhancing Neural Style Transfer using Patch-Based Synthesis (The Eurographics Association, 2019)
Authors: Texler, Ondřej; Fišer, Jakub; Lukáč, Mike; Lu, Jingwan; Shechtman, Eli; Sýkora, Daniel
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
Abstract: We present a new approach to example-based style transfer that combines neural methods with patch-based synthesis to achieve compelling stylization quality even for high-resolution imagery. We take advantage of neural techniques to provide adequate stylization at the global level and use their output as a prior for subsequent patch-based synthesis at the detail level. Thanks to this combination, our method better preserves the high frequencies of the original artistic media, thereby dramatically increasing the fidelity of the resulting stylized imagery. We also show how to stylize extremely large images (e.g., 340 Mpix) without running the synthesis at the pixel level, yet still retaining the original high-frequency details.

Item: Fluidymation: Stylizing Animations Using Natural Dynamics of Artistic Media (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Platkevic, Adam; Curtis, Cassidy; Sýkora, Daniel
Editors: Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan
Abstract: We present Fluidymation, a new example-based approach to stylizing animation that employs the natural dynamics of artistic media to convey a prescribed motion. In contrast to previous stylization techniques that transfer the hand-painted appearance of a static style exemplar and then try to enforce temporal coherence, we use moving exemplars that capture the artistic medium's inherent dynamic properties, and we transfer both movement and appearance to reproduce natural-looking transitions between individual animation frames.
Our approach can synthetically generate stylized sequences that look as if actual paint were diffusing across a canvas in the direction and at the speed of the target motion.

Item: Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network (The Eurographics Association, 2019)
Authors: Futschik, David; Chai, Menglei; Cao, Chen; Ma, Chongyang; Stoliar, Aleksei; Korolev, Sergey; Tulyakov, Sergey; Kučera, Michal; Sýkora, Daniel
Editors: Kaplan, Craig S.; Forbes, Angus; DiVerdi, Stephen
Abstract: We present a learning-based style transfer algorithm for human portraits that significantly outperforms the current state of the art in computational overhead while maintaining comparable visual quality. We show how to design a conditional generative adversarial network capable of reproducing the output of the patch-based method of Fišer et al. [FJS*17], which is slow to compute but delivers state-of-the-art visual quality. Since the resulting end-to-end network can be evaluated quickly on current consumer GPUs, our solution enables the first real-time high-quality style transfer to facial videos, running at interactive frame rates. Moreover, in cases where the original algorithmic approach of Fišer et al. fails, our network can provide a more visually pleasing result thanks to its ability to generalize. We demonstrate the practical utility of our approach on a variety of different styles and target subjects.

Item: Seamless Reconstruction of Part-Based High-Relief Models from Hand-Drawn Images (ACM, 2018)
Authors: Dvorožnák, Marek; Nejad, Saman Sepehri; Jamriška, Ondřej; Jacobson, Alec; Kavan, Ladislav; Sýkora, Daniel
Editors: Aydın, Tunç; Sýkora, Daniel
Abstract: We present a new approach to the reconstruction of high-relief models from hand-made drawings. Our method is tailored to an interactive modeling scenario in which the input drawing can be separated into a set of semantically meaningful parts whose relative depth order is known beforehand.
For this kind of input, our technique inflates individual components to a semi-elliptical profile, positions them to satisfy the prescribed depth order, and interconnects them seamlessly. Compared to previous similar frameworks, our approach is the first to formulate this reconstruction process as a joint non-linear optimization problem. Although its direct optimization is computationally demanding, we propose an approximate solution that delivers comparable results orders of magnitude faster, enabling an interactive response. We evaluate our approach on various hand-made drawings and demonstrate that it provides state-of-the-art quality in comparison with previous methods that require comparable user intervention.

Item: STALP: Style Transfer with Auxiliary Limited Pairing (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Futschik, David; Kucera, Michal; Lukác, Mike; Wang, Zhaowen; Shechtman, Eli; Sýkora, Daniel
Editors: Mitra, Niloy; Viola, Ivan
Abstract: We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time, semantically meaningful style transfer to a set of target images with content similar to the source image. A key added value of our approach is that it also considers the consistency of the target images during training. Although those have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use a similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to explicitly handle temporal consistency.
We demonstrate its practical utility on various applications, including video stylization and style transfer to panoramas, faces, and 3D models.

Item: StyleBlit: Fast Example-Based Stylization with Local Guidance (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Sýkora, Daniel; Jamriška, Ondrej; Texler, Ondrej; Fišer, Jakub; Lukác, Mike; Lu, Jingwan; Shechtman, Eli
Editors: Alliez, Pierre; Pellacini, Fabio
Abstract: We present StyleBlit, an efficient example-based style transfer algorithm that can deliver high-quality stylized renderings in real time on a single-core CPU. Our technique is especially suitable for style transfer applications that use local guidance: descriptive guiding channels containing large spatial variations. Local guidance encourages the transfer of content from the source exemplar to the target image in a semantically meaningful way. Typical local guidance includes, e.g., normal values, texture coordinates, or a displacement field. Contrary to previous style transfer techniques, our approach does not involve any computationally expensive optimization. We demonstrate that when local guidance is used, optimization-based techniques converge to solutions that can be well approximated by simple pixel-level operations. Inspired by this observation, we designed an algorithm that produces results visually similar to, if not better than, the state of the art, and is several orders of magnitude faster. Our approach is suitable for scenarios with a low computational budget, such as games and mobile applications.

Item: StyleProp: Real-time Example-based Stylization of 3D Models (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Hauptfleisch, Filip; Texler, Ondrej; Texler, Aneta; Krivánek, Jaroslav; Sýkora, Daniel
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Abstract: We present a novel approach to the real-time non-photorealistic rendering of 3D models in which a single hand-drawn exemplar specifies their appearance.
We employ guided patch-based synthesis to achieve high visual quality as well as temporal coherence. However, unlike previous techniques that maintain consistency in only one dimension (the temporal domain), our approach takes multiple dimensions into account to cover all degrees of freedom given by the available space of interactions (e.g., camera rotations). To enable an interactive experience, we precalculate a sparse latent representation of the entire interaction space, which allows a stylized image to be rendered in real time, even on a mobile device. To the best of our knowledge, the proposed system is the first to enable interactive example-based stylization of 3D models with full temporal coherence in a predefined interaction space.
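Several of the abstracts above (StyleBlit in particular) describe guided style transfer as something well approximated by simple pixel-level operations: for each target pixel, content is copied from the source exemplar pixel whose guidance value (e.g., a surface normal) matches best. The sketch below, a hypothetical brute-force illustration rather than any of the published algorithms, shows that pixel-level idea with NumPy; StyleBlit itself avoids this exhaustive per-pixel search by blitting larger coherent chunks, which is what makes it real-time.

```python
import numpy as np

def guided_pixel_transfer(style, style_guide, target_guide):
    """Naive per-pixel guided transfer: for every target pixel, copy the
    color of the source pixel whose guidance value (e.g., a normal) is
    closest in squared-distance terms. Illustrative only; the published
    methods replace this brute-force search with far cheaper schemes."""
    h, w, _ = target_guide.shape
    src = style_guide.reshape(-1, style_guide.shape[-1])   # (N, c) source guidance
    flat_style = style.reshape(-1, style.shape[-1])        # (N, 3) source colors
    out = np.empty((h, w, style.shape[-1]), dtype=style.dtype)
    for y in range(h):
        # squared guidance distance from each pixel in this target row
        # to every source pixel: (w, N)
        d = ((src[None, :, :] - target_guide[y][:, None, :]) ** 2).sum(-1)
        out[y] = flat_style[d.argmin(axis=1)]              # best-matching colors
    return out
```

When the target guidance equals the source guidance, the transfer reproduces the exemplar exactly, which is the sanity check one would expect of any guided synthesis scheme.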