Browsing by Author "Masai, Katsutoshi"
Now showing 1 - 3 of 3
Item
Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display (The Eurographics Association, 2019)
Nakamura, Fumihiko; Suzuki, Katsuhiro; Masai, Katsutoshi; Itoh, Yuta; Sugiura, Yuta; Sugimoto, Maki; Kakehi, Yasuaki and Hiyama, Atsushi
Facial expressions enrich communication via avatars. However, in common immersive virtual reality (VR) systems, facial occlusion by the head-mounted display (HMD) makes it difficult to capture users' faces. The mouth plays a particularly important role in facial expressions because it is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in the HMD and automatically labels the training dataset by vowel recognition. We experimented with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves average classification accuracies of 99.9% and 96.3% under the manual and automated labeling conditions, respectively. These findings indicate that automated labeling is competitive with manual labeling, although manual labeling yields slightly higher accuracy. Furthermore, we develop an application that reflects the mouth shape on avatars by blending the six mouth shapes and applying the blended result to the avatar. (A hedged code sketch of the auto-labeling idea follows the second item below.)

Item
FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms (The Eurographics Association, 2019)
Fukuoka, Masaaki; Verhulst, Adrien; Nakamura, Fumihiko; Takizawa, Ryo; Masai, Katsutoshi; Sugimoto, Maki; Kakehi, Yasuaki and Hiyama, Atsushi
Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but they require cooperation with the operator. To improve this cooperation, the SRLs can try to predict the operator's intentions, and one way to do so is to use the operator's Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g., grab, release). To measure FEs, we used an optical sensor-based approach (here, sensors inside an HMD). The sensor data are fed to an SVM that predicts FEs. The SRAs can then carry out commands by predicting the operator's FEs (and, arguably, the operator's intentions). We ran a data-collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We investigated the mapping patterns by (1) performing an object reaching-grasping-releasing task using "any" FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; and (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups and SRA commands by recording task completion time. We found that the optimum combinations are (i) eyes + mouth for grabbing/releasing, and (ii) mouth alone for extending/contracting the arms (i.e., along the forward axis).
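The first item above trains a mouth-shape classifier whose labels come from vowel recognition rather than manual annotation. The abstract does not give the pipeline in detail, so the following is only a minimal sketch of the idea, assuming a fixed vowel-to-class mapping, hypothetical optical-sensor frames, and scikit-learn's SVC as a stand-in learner (the abstract does not name the classifier):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Six mouth-shape classes, one per vowel plus neutral (an assumption: the
# abstract says six classes but does not enumerate them).
VOWEL_TO_CLASS = {"a": 0, "i": 1, "u": 2, "e": 3, "o": 4, "neutral": 5}

def auto_label(frames, vowels):
    """Attach to each sensor frame the class implied by the vowel recognized
    at the same instant, replacing manual annotation."""
    return frames, np.array([VOWEL_TO_CLASS[v] for v in vowels])

# Hypothetical data: 600 frames from 16 optical sensors in the HMD, plus a
# recognized vowel per frame (e.g., obtained from simultaneous speech).
rng = np.random.default_rng(0)
frames = rng.random((600, 16))
vowels = rng.choice(list(VOWEL_TO_CLASS), size=600)

X, y = auto_label(frames, vowels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf")  # stand-in classifier; the kernel choice is an assumption
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")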
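The second item maps predicted facial-expression groups to robotic-arm commands. As a rough illustration of the optimum mapping the abstract reports (eyes + mouth for grab/release, mouth alone for extend/contract), here is a sketch in which the enum names and the dispatch helper are hypothetical:

from enum import Enum, auto

class FEGroup(Enum):
    EYES_AND_MOUTH = auto()
    MOUTH = auto()

class SRACommand(Enum):
    GRAB = auto()
    RELEASE = auto()
    EXTEND = auto()
    CONTRACT = auto()

# The combination the second study (N=14) found most effective.
COMMAND_TO_GROUP = {
    SRACommand.GRAB: FEGroup.EYES_AND_MOUTH,
    SRACommand.RELEASE: FEGroup.EYES_AND_MOUTH,
    SRACommand.EXTEND: FEGroup.MOUTH,
    SRACommand.CONTRACT: FEGroup.MOUTH,
}

def candidate_commands(predicted: FEGroup):
    """Commands that an SVM-predicted FE group is allowed to trigger."""
    return [c for c, g in COMMAND_TO_GROUP.items() if g is predicted]

print(candidate_commands(FEGroup.MOUTH))  # [SRACommand.EXTEND, SRACommand.CONTRACT]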
Item
ReallifeEngine: A Mixed Reality-Based Visual Programming System for SmartHomes (The Eurographics Association, 2019)
Suzuki, Ryohei; Masai, Katsutoshi; Sugimoto, Maki; Kakehi, Yasuaki and Hiyama, Atsushi
The conveniences available to society have improved tremendously with the development of the Internet of Things (IoT). Among the affordances stemming from this innovation is an IoT concept called the SmartHome, which is already spreading in general households. Despite this proliferation, however, ordinary users find the complex control and automation of IoT devices difficult, which keeps them from fully exploiting the benefits of IoT. These problems highlight the need for a system that enables general users to easily manipulate IoT devices. Accordingly, this study constructed a visual programming system that facilitates IoT device operation. The system, built on data obtained from various sensors in a SmartHome, employs mixed reality (MR) to enhance the visualization of the data, ease the understanding of positional relationships among devices, and simplify the checking of execution results. We conducted an evaluation experiment in which eight users tested the proposed system, verifying its usefulness through the time participants took to program diverse IoT devices and a questionnaire capturing their subjective assessments. The results indicate that the proposed system makes it easy to understand the correspondence between real-world devices and nodes in the MR environment, as well as the connections between sensors and home appliances; its operability, however, was rated negatively.
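The third item wires SmartHome sensors to home appliances through visual programming nodes in MR. The system itself is not described at code level here, so the following is only a minimal sketch of a sensor-to-appliance rule model of the kind such a node graph compiles to; all node names, thresholds, and helpers are invented for illustration:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One visual-programming connection: a sensor node wired to an appliance node."""
    sensor: str
    condition: Callable[[float], bool]
    appliance: str
    action: str

rules = [
    Rule("living_room_temperature", lambda v: v > 28.0, "air_conditioner", "turn_on"),
    Rule("entrance_motion", lambda v: v > 0.5, "hall_light", "turn_on"),
]

def evaluate(readings: dict, rules: list):
    """Fire the action of every rule whose sensor reading satisfies its condition."""
    for rule in rules:
        value = readings.get(rule.sensor)
        if value is not None and rule.condition(value):
            print(f"{rule.appliance}.{rule.action}()  # triggered by {rule.sensor}={value}")

evaluate({"living_room_temperature": 30.2, "entrance_motion": 0.0}, rules)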