Browsing by Author "Benes, Bedrich"
Now showing 1 - 6 of 6
Item: Authoring Terrains with Spatialised Style
(The Eurographics Association and John Wiley & Sons Ltd., 2023)
Perche, Simon; Peytavie, Adrien; Benes, Bedrich; Galin, Eric; Guérin, Eric; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Various terrain modelling methods have been proposed over the past decades, providing efficient and often interactive authoring tools. However, they seldom include any notion of style, which is critical for designers in the entertainment industry. We introduce a new generative network method that bridges the gap between automatic terrain synthesis and authoring, providing a versatile set of authoring tools that allow spatialised style. We build upon the StyleGAN2 architecture and extend it with authoring tools. Given an input sketch or existing elevation map, our method generates a terrain with features that can be authored, enhanced, and augmented using interactive brushes and style manipulation tools. The strength of our approach lies in the versatility and interoperability of the different tools. We validate our method quantitatively with drainage calculations against previous techniques and qualitatively by asking users to follow a prompt or freely create a terrain.

Item: Editorial
(© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2021)
Hauser, Helwig; Benes, Bedrich; Benes, Bedrich and Hauser, Helwig

Item: Procedural Riverscapes
(The Eurographics Association and John Wiley & Sons Ltd., 2019)
Peytavie, Adrien; Dupont, Thibault; Guérin, Eric; Cortial, Yann; Benes, Bedrich; Gain, James; Galin, Eric; Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
This paper addresses the problem of creating animated riverscapes through a novel procedural framework that generates the inscribing geometry of a river network and then synthesizes matching real-time water movement animation.
Our approach takes bare-earth heightfields as input, derives hydrologically inspired river network trajectories, carves riverbeds into the terrain, and then automatically generates a corresponding blend-flow tree for the water surface. Characteristics such as the riverbed width, depth, and shape, as well as the elevation and flow of the fluid surface, are procedurally derived from the terrain and river type. The riverbed is inscribed by combining compactly supported elevation modifiers over the river course. Subsequently, the water surface is defined as a time-varying continuous function encoded as a blend-flow tree whose leaves are parameterized procedural flow primitives and whose internal nodes are blend operators. While river generation is fully automated, we also incorporate intuitive interactive editing of both river trajectories and individual riverbed and flow primitives. The resulting framework enables the generation of a wide range of river forms, from slow meandering rivers to rapids with churning water, including surface effects such as foam and leaves carried downstream.

Item: Semi-Procedural Textures Using Point Process Texture Basis Functions
(The Eurographics Association and John Wiley & Sons Ltd., 2020)
Guehl, Pascal; Allègre, Remi; Dischler, Jean-Michel; Benes, Bedrich; Galin, Eric; Dachsbacher, Carsten and Pharr, Matt
We introduce a novel semi-procedural approach that avoids the drawbacks of procedural textures and leverages the advantages of data-driven texture synthesis. We split synthesis into two parts: 1) structure synthesis, based on a procedural parametric model, and 2) color detail synthesis, which is data-driven. The procedural model consists of a generic Point Process Texture Basis Function (PPTBF), which extends sparse convolution noises by defining rich convolution kernels.
They consist of a window function multiplied by a correlated statistical mixture of Gabor functions, both designed to encapsulate a large span of common spatial stochastic structures, including cells, cracks, grains, scratches, spots, stains, and waves. Parameters can be prescribed automatically by supplying binary structure exemplars. As with noise-based Gaussian textures, the PPTBF is used as a stand-alone function, avoiding the classification tasks that occur when handling multiple procedural assets. Because the PPTBF is based on a single set of parameters, it allows for continuous transitions between different visual structures and easy control over its visual characteristics. Color is consistently synthesized from the exemplar using multiscale parallel texture synthesis by numbers, constrained by the PPTBF. The generated textures are parametric, infinite, and free of repetition. The data-driven part is automatic and guarantees strong visual resemblance to the inputs.

Item: Sketching Vocabulary for Crowd Motion
(The Eurographics Association and John Wiley & Sons Ltd., 2022)
Mathew, C. D. Tharindu; Benes, Bedrich; Aliaga, Daniel; Dominik L. Michels; Soeren Pirk
This paper proposes and evaluates a sketching language for authoring crowd motion. It focuses on the path, speed, thickness, and density parameters of crowd motion. A sketch-based vocabulary is proposed for each parameter and evaluated in a user study against complex crowd scenes. A sketch recognition pipeline converts the sketches into a crowd simulation. The user study results show that 1) participants at various skill levels can draw accurate crowd motion through sketching, 2) certain sketch styles lead to a more accurate representation of crowd parameters, and 3) sketching allows users to produce complex crowd motions in a few seconds.
The results also show that some styles, although accurate, are less preferred than less accurate ones.

Item: Towards Immersive Visualization for Large Lectures: Opportunities, Challenges, and Possible Solutions
(The Eurographics Association, 2023)
Popescu, Voicu; Magana, Alejandra J.; Benes, Bedrich; Magana, Alejandra; Zara, Jiri
In this position paper, we discuss deploying immersive visualization in large lectures (IVLL). We take the position that IVLL has great potential to benefit students and that, thanks to current advances in computer hardware and software, IVLL implementation is now possible. We argue that IVLL is best done using mixed reality (MR) headsets, which, compared to virtual reality (VR) headsets, have the advantages of allowing students to see important elements of the real world and of avoiding cybersickness. We argue that immersive visualization can be beneficial at any point on the student engagement continuum. We argue that immersive visualization allows reconfiguring large lectures dynamically, partitioning the class with great flexibility into groups of students of various sizes, or accommodating 3D visualizations of monumental size. We inventory the challenges that must be overcome to implement IVLL and argue that they currently have acceptable solutions, opening the door to developing a first IVLL system.
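The blend-flow tree described in the Procedural Riverscapes abstract, with leaves as parameterized procedural flow primitives and internal nodes as blend operators, can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation; all class names, parameters, and the choice of a sinusoidal primitive are assumptions.

```python
import math

class WavePrimitive:
    """Leaf node: a hypothetical parameterized procedural flow primitive,
    here a simple travelling sine wave."""
    def __init__(self, amplitude, wavelength, speed):
        self.amplitude = amplitude
        self.wavelength = wavelength
        self.speed = speed

    def height(self, x, t):
        # Time-varying water-surface elevation at position x and time t.
        return self.amplitude * math.sin(
            2.0 * math.pi * (x / self.wavelength - self.speed * t))

class BlendNode:
    """Internal node: a blend operator combining child surfaces
    with normalized weights."""
    def __init__(self, children, weights):
        self.children = children
        self.weights = weights

    def height(self, x, t):
        total = sum(self.weights)
        return sum(w * c.height(x, t)
                   for w, c in zip(self.weights, self.children)) / total

# Blend a slow meander swell with fast rapid-like ripples: the tree
# encodes the water surface as a continuous function of space and time.
tree = BlendNode(
    [WavePrimitive(0.5, 10.0, 0.2), WavePrimitive(0.05, 0.8, 2.0)],
    [0.7, 0.3])
h = tree.height(3.0, 1.0)
```

Because every node exposes the same `height(x, t)` interface, editing one flow primitive or rebalancing one blend weight changes the surface locally without recomputing the rest of the tree.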
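The PPTBF kernel described in the Semi-Procedural Textures abstract, a window function multiplied by a mixture of Gabor functions, can be sketched in the same spirit. The window shape (a smoothstep falloff), the Gabor parameterization, and all names here are assumptions for illustration, not the authors' code.

```python
import math

def window(r):
    """Compactly supported window: 1 at r = 0, smoothstep falloff to 0 at r = 1."""
    if r >= 1.0:
        return 0.0
    t = 1.0 - r
    return t * t * (3.0 - 2.0 * t)

def gabor(x, y, freq, angle, bandwidth):
    """A single Gabor function: Gaussian envelope times an oriented cosine wave."""
    u = x * math.cos(angle) + y * math.sin(angle)
    envelope = math.exp(-bandwidth * (x * x + y * y))
    return envelope * math.cos(2.0 * math.pi * freq * u)

def pptbf_kernel(x, y, gabors):
    """Window multiplied by a weighted mixture of Gabor functions.
    `gabors` is a list of (weight, freq, angle, bandwidth) tuples."""
    r = math.hypot(x, y)
    mix = sum(w * gabor(x, y, f, a, b) for w, f, a, b in gabors)
    return window(r) * mix

# Two-component mixture at differing frequencies and orientations.
val = pptbf_kernel(0.2, 0.1,
                   [(0.6, 4.0, 0.0, 2.0), (0.4, 9.0, 1.2, 3.0)])
```

Varying the mixture weights, frequencies, and orientations continuously is what lets a single parameter set span visually distinct structures such as cells, cracks, and waves.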