Browsing by Author "Iwata, Hiroyasu"
Now showing 1 - 2 of 2
Item
Development of Attention Overload Virtual Reality Training System to Extend Effective Attention Resources (The Eurographics Association, 2023)
Authors: Suzuki, Kouta; Iwasaki, Yukiko; Nishida, Nonoka; Tsuji, Ayumu; Kato, Fumihiro; Iwata, Hiroyasu
Editors: Abey Campbell; Claudia Krogmeier; Gareth Young
In recent years, various proposals have been made for body extensions, such as the "third arm" and other extended limbs. These extended limbs overcome physical limitations in multitasking, but they place a heavy burden on human cognition because they require attention to multiple positions. In this study, we developed VR training that applies a moderate attentional load and raises it step by step, with the aim of improving multitasking ability by extending human effective attentional resources. The results of the validation test showed a 10-20% improvement in test-task performance, confirming that the number of tasks that can be handled and the speed of response to tasks increased. In addition, it was confirmed that switching of attention became more efficient, reducing the amount of attentional resources being wasted.

Item
Stacked Dual Attention for Joint Dependency Awareness in Pose Reconstruction and Motion Prediction (The Eurographics Association, 2023)
Authors: Guinot, Lena; Matsumoto, Ryutaro; Iwata, Hiroyasu
Editors: Jean-Marie Normand; Maki Sugimoto; Veronica Sundstedt
Human pose reconstruction and motion prediction in real-time environments have become pivotal areas of research, especially with the burgeoning applications in Virtual and Augmented Reality (VR/AR). This paper presents a novel deep neural network underpinned by a stacked dual attention mechanism, leveraging data from just 6 Inertial Measurement Units (IMUs) to reconstruct full-body human poses. While previous works have predominantly focused on image-based techniques, our approach, driven by the sparsity and versatility of sensors, taps into the potential of sensor-based motion data collection. Acknowledging the challenges posed by the under-constrained nature of IMU data and the inherent limitations of available open-source datasets, we transform motion capture data into an IMU-compatible format. Through a holistic understanding of joint dependencies and temporal dynamics, our method promises enhanced accuracy in motion prediction, even in the uncontrolled environments typical of everyday scenarios. Benchmarking our model against prevailing methods, we underscore the superiority of our dual attention mechanism, setting a new benchmark for real-time motion prediction using minimalistic sensor arrangements.
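To make the "stacked dual attention" idea in the second item concrete, below is a minimal Python/PyTorch sketch assuming the dual mechanism means one attention pass across the 6 IMU sensor streams (spatial, capturing joint dependencies) and one across time steps (temporal dynamics), with blocks stacked and a head that regresses full-body joint rotations. All class names, feature dimensions, the 22-joint output, and the overall layout are illustrative assumptions, not the authors' implementation.

# Sketch of a stacked dual-attention network for sparse-IMU pose reconstruction.
# Assumption: "dual attention" = spatial attention over sensors + temporal
# attention over frames; sizes and names below are hypothetical.
import torch
import torch.nn as nn


class DualAttentionBlock(nn.Module):
    """One spatial-then-temporal attention block with residual connections."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, sensors, dim)
        b, t, s, d = x.shape

        # Spatial attention: each frame attends across the 6 IMU sensor streams.
        xs = x.reshape(b * t, s, d)
        attn, _ = self.spatial_attn(xs, xs, xs)
        x = self.norm1((xs + attn).reshape(b, t, s, d))

        # Temporal attention: each sensor stream attends across time steps.
        xt = x.permute(0, 2, 1, 3).reshape(b * s, t, d)
        attn, _ = self.temporal_attn(xt, xt, xt)
        xt = self.norm2(xt + attn)
        return xt.reshape(b, s, t, d).permute(0, 2, 1, 3)


class SparseIMUPoseNet(nn.Module):
    """Maps 6 IMU feature streams to per-frame full-body joint rotations."""

    def __init__(self, imu_feats: int = 12, dim: int = 64,
                 blocks: int = 3, out_joints: int = 22):
        super().__init__()
        self.embed = nn.Linear(imu_feats, dim)
        self.blocks = nn.Sequential(*[DualAttentionBlock(dim) for _ in range(blocks)])
        # Pool over the sensor axis, then predict a 6D rotation per joint per frame.
        self.head = nn.Linear(dim, out_joints * 6)

    def forward(self, imu: torch.Tensor) -> torch.Tensor:
        # imu: (batch, time, 6, imu_feats) -> (batch, time, out_joints, 6)
        x = self.blocks(self.embed(imu))
        x = x.mean(dim=2)  # average over the 6 sensors
        return self.head(x).reshape(*x.shape[:2], -1, 6)


# Usage: a 90-frame window from 6 IMUs, each supplying 12 features
# (e.g. flattened orientation plus acceleration).
poses = SparseIMUPoseNet()(torch.randn(2, 90, 6, 12))
print(poses.shape)  # torch.Size([2, 90, 22, 6])

Factoring attention into separate sensor-axis and time-axis passes keeps the cost of each pass small (6 tokens and T tokens respectively, rather than 6*T jointly), which is one plausible reading of why a dual mechanism suits real-time prediction from minimal sensor arrangements.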