Browsing by Author "Giachetti, A."
Now showing 1 - 2 of 2
Item: Online Gesture Recognition (The Eurographics Association, 2019)
Authors: Caputo, F. M.; Burato, S.; Pavan, G.; Voillemin, T.; Wannous, H.; Vandeborre, J. P.; Maghoumi, M.; Taranta II, E. M.; Razmjoo, A.; LaViola Jr., J. J.; Manganaro, F.; Pini, S.; Borghi, G.; Vezzani, R.; Cucchiara, R.; Nguyen, H.; Tran, M. T.; Giachetti, A.
Editors: Biasotti, Silvia; Lavoué, Guillaume; Veltkamp, Remco
Abstract: This paper presents the results of the Eurographics 2019 SHape Retrieval Contest track on online gesture recognition. The goal of this contest was to test state-of-the-art methods for the online detection of command gestures from tracked hand movements, on a basic benchmark in which simple gestures are performed interleaved with other actions. Unlike previous contests and benchmarks on trajectory-based gesture recognition, we proposed an online gesture recognition task: rather than providing pre-segmented gestures, we asked the participants to find gestures within recorded trajectories. The results submitted by the participants show that online detection and recognition of sets of very simple gestures from 3D trajectories captured with a cheap sensor can be performed effectively. The best of the proposed methods could therefore be directly exploited to design effective gesture-based interfaces for different contexts, from Virtual and Mixed Reality applications to the remote control of home devices.

Item: A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments (© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Mendes, D.; Caputo, F. M.; Giachetti, A.; Ferreira, A.; Jorge, J.
Editors: Chen, Min; Benes, Bedrich
Abstract: Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or focusing on different input modalities, such as touch and mid‐air gestures. Different virtual environments and diverse input modalities present specific issues to control object position, orientation and scaling: traditional mouse input, for example, presents non‐trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid‐air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects. However, these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state‐of‐the‐art in 3D object manipulation, ranging from traditional desktop approaches to touch and mid‐air interfaces, to interact in diverse virtual environments. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.
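
To illustrate what the "online" task in the first item above means in practice (detecting gestures inside an unsegmented trajectory stream rather than classifying pre-cut clips), here is a minimal sliding-window sketch. It is not any participant's method: the window length, thresholds and the classify_window stub are illustrative assumptions standing in for a real trained classifier.

```python
# Minimal sketch of online gesture detection over a streamed 3D hand trajectory.
# WINDOW, MIN_PATH_LENGTH and classify_window are illustrative assumptions,
# not values or models taken from the SHREC 2019 track.
from collections import deque
import math

WINDOW = 30              # number of most recent 3D samples considered (assumed)
MIN_PATH_LENGTH = 0.15   # minimum travelled distance before attempting classification (assumed)

def classify_window(points):
    """Stand-in for a real gesture classifier (e.g. a recurrent network or a
    template matcher). It only checks whether the recent path roughly closes
    on itself and, if so, labels it 'circle-like'; otherwise returns None."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    if path < MIN_PATH_LENGTH:
        return None
    closure = math.dist(points[0], points[-1])
    return "circle-like" if closure < 0.2 * path else None

def online_detect(stream):
    """Consume an iterable of (x, y, z) samples and yield (sample_index, label)
    whenever the current sliding window is recognised as a gesture."""
    window = deque(maxlen=WINDOW)
    for i, point in enumerate(stream):
        window.append(point)
        if len(window) == WINDOW:
            label = classify_window(list(window))
            if label is not None:
                yield i, label
                window.clear()  # crude way to avoid firing repeatedly on one gesture

if __name__ == "__main__":
    # Fake trajectory: small drift, then a circle in the XY plane, then drift again.
    fake = [(0.001 * i, 0.0, 0.0) for i in range(40)]
    fake += [(0.1 * math.cos(t / 5), 0.1 * math.sin(t / 5), 0.0) for t in range(32)]
    fake += [(0.001 * i, 0.0, 0.5) for i in range(40)]
    for idx, label in online_detect(fake):
        print(f"gesture '{label}' detected ending at sample {idx}")
```

The point of the sketch is the control flow, not the classifier: the detector must decide, sample by sample, whether a gesture has just ended, which is exactly the segmentation burden that the benchmark shifts onto the participants.

The survey in the second item notes that mouse-based manipulation is hard because 2D input must be mapped to 3D actions. One common textbook mapping (not a technique attributed to the surveyed paper) is to translate the object in the plane through the object that is parallel to the image plane, using the camera's right and up axes. The Camera class and the world_per_pixel scale factor below are illustrative assumptions.

```python
# Minimal sketch of mapping a 2D mouse drag onto a 3D translation in the
# camera-parallel plane. Camera and world_per_pixel are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Camera:
    right: tuple  # camera right axis in world space (unit length)
    up: tuple     # camera up axis in world space (unit length)

def drag_to_translation(camera, dx_px, dy_px, world_per_pixel=0.01):
    """Convert a mouse delta in pixels (dx_px to the right, dy_px upward) into a
    world-space translation vector lying in the camera-parallel plane.
    'world_per_pixel' would normally depend on the object's depth and the
    projection; here it is a constant for simplicity."""
    rx, ry, rz = camera.right
    ux, uy, uz = camera.up
    s = world_per_pixel
    # screen-right drags move along camera.right, screen-up drags along camera.up
    return (s * (dx_px * rx + dy_px * ux),
            s * (dx_px * ry + dy_px * uy),
            s * (dx_px * rz + dy_px * uz))

if __name__ == "__main__":
    cam = Camera(right=(1.0, 0.0, 0.0), up=(0.0, 1.0, 0.0))  # camera looking down -Z
    obj_pos = (0.0, 0.0, -2.0)
    tx, ty, tz = drag_to_translation(cam, dx_px=40, dy_px=-15)
    obj_pos = (obj_pos[0] + tx, obj_pos[1] + ty, obj_pos[2] + tz)
    print("object moved to", obj_pos)
```

Even this simple mapping has to choose a plane, a scale and a sign convention, which is why the survey treats desktop mappings as a design space of their own rather than a solved problem.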