Browsing by Author "Sugimoto, Maki"
Now showing 1 - 7 of 7
Item
Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display (The Eurographics Association, 2019)
Authors: Nakamura, Fumihiko; Suzuki, Katsuhiro; Masai, Katsutoshi; Itoh, Yuta; Sugiura, Yuta; Sugimoto, Maki
Editors: Kakehi, Yasuaki; Hiyama, Atsushi
Facial expressions enrich communication via avatars. However, in common immersive virtual reality (VR) systems, facial occlusion by the head-mounted display (HMD) makes it difficult to capture users' faces. The mouth plays a particularly important role in facial expressions because it is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in an HMD and automatically labels the training dataset by vowel recognition. We experimented with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves average classification accuracies of 99.9% and 96.3% under the manual and automated labeling conditions, respectively. These findings indicate that automated labeling is competitive with manual labeling, although its classification accuracy is slightly lower. Furthermore, we developed an application that reflects mouth shapes on avatars by blending the six mouth shapes and applying the blended result to the avatar.

Item
Evaluating Techniques to Share Hand Gestures for Remote Collaboration using Top-Down Projection in a Virtual Environment (The Eurographics Association, 2022)
Authors: Teo, Theophilus; Sakurada, Kuniharu; Fukuoka, Masaaki; Sugimoto, Maki
Editors: Uchiyama, Hideaki; Normand, Jean-Marie
Sharing hand gestures in remote collaboration offers natural and expressive communication between collaborators.
Previously proposed techniques allow sharing either dependent (attached to something) or independent (not attached) hand gestures in immersive remote collaboration. However, research gaps remain regarding how different gesture-sharing techniques affect user behaviour and performance. In this paper, we present an evaluation study comparing the sharing of dependent and independent hand gestures. We developed a prototype supporting three techniques for sharing hand gestures: Attached to Local, Attached to Object, and Independent Hands. We also use top-down projection, an easy-to-set-up method for sharing a local user's environment with a remote user. Comparing the three techniques, we found that independent hands help a remote user guide a local user through an object interaction task more quickly than hands attached to the local user. They also give clearer instructions than dependent hands, despite the limited depth perception caused by top-down projection. A similar trend appears in the remote users' preferences.

Item
ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (The Eurographics Association, 2020)
Authors: Argelaguet, Ferran; McMahan, Ryan; Sugimoto, Maki; Kulik, Alexander; Sra, Misha; Kim, Kangsoo; Seo, Byung-Kuk
Editors: Argelaguet, Ferran; McMahan, Ryan; Sugimoto, Maki

Item
An Integrated Ducted Fan-Based Multi-Directional Force Feedback with a Head Mounted Display (The Eurographics Association, 2022)
Authors: Watanabe, Koki; Nakamura, Fumihiko; Sakurada, Kuniharu; Teo, Theophilus; Sugimoto, Maki
Editors: Uchiyama, Hideaki; Normand, Jean-Marie
Adding force feedback to virtual reality applications enhances the immersive experience. We propose a prototype featuring head-based multi-directional force feedback in a virtual environment, designed by integrating four ducted fans into a head-mounted display.
Our technical evaluation revealed the force characteristics of the ducted fans, including presentable force, sound level, and latency. In the first part of our study, we investigated the minimum force that a user can perceive in different directions (forward/backward force; up/down/left/right rotational force); the results suggest an absolute detection threshold for each directional force. In the second part of our study, we evaluated the impact of the force feedback through an immersive flight simulation. The results indicate that our technique significantly improved user enjoyment, comfort, and visual-and-tactile perception, and reduced simulator sickness in the immersive flight simulation.

Item
Invisible Long Arm Illusion: Illusory Body Ownership by Synchronous Movement of Hands and Feet (The Eurographics Association, 2018)
Authors: Kondo, Ryota; Ueda, Sachiyo; Sugimoto, Maki; Minamizawa, Kouta; Inami, Masahiko; Kitazaki, Michiteru
Editors: Bruder, Gerd; Yoshimoto, Shunsuke; Cobb, Sue
Synchronicity between a fake body and our actual body can make us feel as if the fake body is our own (illusory body ownership), even if the fake body has a different shape. In our previous study, we showed that illusory body ownership can be induced for an invisible body through the synchronous movement of just the hands and feet. In this study, we investigated whether illusory body ownership can be induced for an invisible body whose arm length differs from that of the usual body. We modified the arm length of a full-body avatar or changed the hand position of the invisible-body stimulus, and found that illusory body ownership was induced for the transformed body by synchronous movement.
Participants' reaching behavior gradually changed to make greater use of the longer arm as they learned the transformed body.

Item
ReallifeEngine: A Mixed Reality-Based Visual Programming System for SmartHomes (The Eurographics Association, 2019)
Authors: Suzuki, Ryohei; Masai, Katsutoshi; Sugimoto, Maki
Editors: Kakehi, Yasuaki; Hiyama, Atsushi
The conveniences experienced by society have improved tremendously with the development of the Internet of Things (IoT). Among the affordances stemming from this innovation is an IoT concept called the SmartHome, which is already spreading even in general households. Despite this proliferation, ordinary users have difficulty performing complex control and automation of IoT devices, which impedes their full exploitation of IoT benefits. These problems highlight the need for a system that enables general users to easily manipulate IoT devices. Accordingly, this study constructed a visual programming system that facilitates IoT device operation. The system, built on data obtained from various sensors in a SmartHome, employs mixed reality (MR) to enhance the visualization of various data, ease understanding of the positional relationships among devices, and simplify checking of execution results. We conducted an evaluation experiment in which eight users tested the proposed system, and we verified its usefulness based on the time the participants took to complete the programming of diverse IoT devices and a questionnaire eliciting their subjective assessments. The results indicate that the proposed system makes it easy to understand the correspondence between real-world devices and nodes in the MR environment, as well as the connections between sensors and home appliances.
On the other hand, its operability was evaluated negatively.

Item
WeightSync: Proprioceptive and Haptic Stimulation for Virtual Physical Perception (The Eurographics Association, 2020)
Authors: Teo, Theophilus; Nakamura, Fumihiko; Sugimoto, Maki; Verhulst, Adrien; Lee, Gun A.; Billinghurst, Mark; Adcock, Matt
Editors: Argelaguet, Ferran; McMahan, Ryan; Sugimoto, Maki
In virtual environments, we can experience augmented embodiment through various virtual avatars. Likewise, in physical environments, we can extend the embodiment experience using Supernumerary Robotic Limbs (SRLs) attached to a person's body. It is also important to consider the feedback given to the operator who controls the avatar (virtual) or the SRLs (physical). In this work, we use a servo motor and Galvanic Vestibular Stimulation to provide feedback from a virtual interaction that simulates remotely controlling SRLs. Our technique transforms information about the virtual objects into haptic and proprioceptive feedback, providing different sensations to the operator.