Browsing by Author "Pettré, Julien"
Now showing 1 - 3 of 3
Item
Algorithms for Microscopic Crowd Simulation: Advancements in the 2010s
(The Eurographics Association and John Wiley & Sons Ltd., 2021)
Toll, Wouter van; Pettré, Julien; Bühler, Katja and Rushmeier, Holly
The real-time simulation of human crowds has many applications. Simulating how the people in a crowd move through an environment is an active and ever-growing research topic. Most research focuses on microscopic (or 'agent-based') crowd simulation methods that model the behavior of each individual person, from which collective behavior can then emerge. This state-of-the-art report analyzes how the research on microscopic crowd simulation has advanced since the year 2010. We focus on the most popular research area within the microscopic paradigm, which is local navigation, and most notably collision avoidance between agents. We discuss the four most popular categories of algorithms in this area (force-based, velocity-based, vision-based, and data-driven) that have either emerged or grown in the last decade. We also analyze the conceptual and computational (dis)advantages of each category. Next, we extend the discussion to other types of behavior or navigation (such as group behavior and the combination with path planning), and we review work on evaluating the quality of simulations. Based on the observed advancements in the 2010s, we conclude by predicting how the research area of microscopic crowd simulation will evolve in the future. Overall, we expect significant growth in the area of data-driven and learning-based agent navigation, and we expect an increasing number of methods that re-group multiple 'levels' of behavior into one principle. Furthermore, we observe a clear need for new ways to analyze (real or simulated) crowd behavior, which is important for quantifying the realism of a simulation and for choosing the right algorithms at the right time.
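The force-based category mentioned in this abstract traces back to social-force models, in which each agent is pulled towards its goal and pushed away from nearby agents by pairwise repulsive forces. The sketch below is only an illustration of that idea, not code from the report; the function name social_force_step and every parameter value are invented for the example.

```python
import numpy as np

def social_force_step(positions, velocities, goals, dt=0.1,
                      pref_speed=1.3, relax_time=0.5,
                      repulsion_strength=2.0, repulsion_range=0.3):
    """One explicit Euler step of a simplified social-force model.

    positions, velocities, goals: arrays of shape (n_agents, 2).
    Each agent relaxes its velocity towards a preferred velocity aimed at
    its goal and is repelled from other agents by an exponentially
    decaying force. All defaults are illustrative, not calibrated.
    """
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        # Goal-driven force: steer the current velocity towards the
        # preferred velocity (unit vector to the goal times pref_speed).
        to_goal = goals[i] - positions[i]
        dist = np.linalg.norm(to_goal)
        desired = pref_speed * to_goal / dist if dist > 1e-6 else np.zeros(2)
        forces[i] += (desired - velocities[i]) / relax_time
        # Pairwise repulsive forces from all other agents.
        for j in range(n):
            if i == j:
                continue
            diff = positions[i] - positions[j]
            d = np.linalg.norm(diff)
            if d > 1e-6:
                forces[i] += repulsion_strength * np.exp(-d / repulsion_range) * diff / d
    velocities = velocities + dt * forces
    positions = positions + dt * velocities
    return positions, velocities

# Example usage (two agents walking towards each other):
# p = np.array([[0.0, 0.0], [4.0, 0.0]])
# v = np.zeros_like(p)
# g = np.array([[4.0, 0.0], [0.0, 0.0]])
# for _ in range(100):
#     p, v = social_force_step(p, v, g)
```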
Item
Authoring Virtual Crowds: A Survey
(The Eurographics Association and John Wiley & Sons Ltd., 2022)
Lemonari, Marilena; Blanco, Rafael; Charalambous, Panayiotis; Pelechano, Nuria; Avraamides, Marios; Pettré, Julien; Chrysanthou, Yiorgos; Meneveaux, Daniel; Patanè, Giuseppe
Recent advancements in crowd simulation unravel a wide range of functionalities for virtual agents, delivering highly-realistic, natural virtual crowds. Such systems are of particular importance to a variety of applications in fields such as: entertainment (e.g., movies, computer games); architectural and urban planning; and simulations for sports and training. However, providing their capabilities to untrained users necessitates the development of authoring frameworks. Authoring virtual crowds is a complex and multi-level task, varying from assuming control and assisting users to realise their creative intents, to delivering intuitive and easy-to-use interfaces, facilitating such control. In this paper, we present a categorisation of the authorable crowd simulation components, ranging from high-level behaviours and path-planning to local movements, as well as animation and visualisation. We provide a review of the most relevant methods in each area, emphasising the amount and nature of influence that the users have over the final result. Moreover, we discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work. Finally, we suggest promising directions for future research that mainly stem from the rise of learning-based methods, and the need for a unified authoring framework.

Item
A Survey on Reinforcement Learning Methods in Character Animation
(The Eurographics Association and John Wiley & Sons Ltd., 2022)
Kwiatkowski, Ariel; Alvarado, Eduardo; Kalogeiton, Vicky; Liu, C. Karen; Pettré, Julien; Panne, Michiel van de; Cani, Marie-Paule; Meneveaux, Daniel; Patanè, Giuseppe
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions and achieve a particular goal within an arbitrary environment. While learning, they repeatedly take actions based on their observation of the environment, and receive appropriate rewards which define the objective. This experience is then used to progressively improve the policy controlling the agent's behavior, typically represented by a neural network. This trained module can then be reused for similar problems, which makes this approach promising for the animation of autonomous, yet reactive characters in simulators, video games or virtual reality environments. This paper surveys the modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation, from skeletal control of a single, physically-based character to navigation controllers for individual agents and virtual crowds. It also describes the practical side of training DRL systems, comparing the different frameworks available to build such agents.
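The observe-act-reward-update loop described in this abstract can be made concrete with its simplest instance: tabular Q-learning on a toy environment. The sketch below is purely illustrative and is not taken from the survey; GridWorld, train_q_learning and all hyperparameters are invented for the example, and in the deep RL methods the survey covers, the Q-table would be replaced by a neural network.

```python
import numpy as np

class GridWorld:
    """Toy 1-D corridor: the agent starts at cell 0 and must reach the last
    cell. Purely illustrative; it stands in for the simulator or game engine."""
    def __init__(self, length=5):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action 0 = move left, action 1 = move right
        self.state = int(np.clip(self.state + (1 if action == 1 else -1),
                                 0, self.length - 1))
        done = self.state == self.length - 1
        reward = 1.0 if done else -0.01  # small step cost rewards short paths
        return self.state, reward, done

def train_q_learning(episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: the observe-act-reward loop from the abstract,
    with a lookup table playing the role of the policy network."""
    env = GridWorld()
    q = np.zeros((env.length, 2))  # one action value per (state, action)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection from the current value estimates
            if np.random.rand() < epsilon:
                action = np.random.randint(2)
            else:
                action = int(np.argmax(q[state]))
            next_state, reward, done = env.step(action)
            # temporal-difference update of the chosen action value
            q[state, action] += alpha * (reward + gamma * np.max(q[next_state])
                                         - q[state, action])
            state = next_state
    return q

if __name__ == "__main__":
    print(train_q_learning())
```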