ICAT-EGVE2014
Browsing ICAT-EGVE2014 by Subject "and virtual realities"
Now showing 1 - 5 of 5
Item: Interpretation of Tactile Sensation using an Anthropomorphic Finger Motion Interface to Operate a Virtual Avatar
(The Eurographics Association, 2014) Ujitoko, Yusuke; Hirota, Koichi
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
The objective of the system presented in this paper is to give users tactile feedback while walking in a virtual world through an anthropomorphic finger motion interface. We determined that the synchrony between the first-person perspective and proprioceptive information, together with the motor activity of the user's fingers, can induce an illusory feeling equivalent to a sense of ownership of the invisible avatar's legs. Under this condition, the ground under the virtual avatar's foot is perceived through the user's fingertip. The experiments indicated that with our method the scale of the tactile perception of texture roughness was extended, and that the enlargement ratio was proportional to the avatar's body (foot) size. To display the target tactile perception to users, we only have to control the virtual avatar's body (foot) size and the roughness of the tactile texture. Our results suggest that, in terms of tactile perception, fingers can replace legs in locomotion interfaces.

Item: Investigation of Dynamic View Expansion for Head-Mounted Displays with Head Tracking in Virtual Environments
(The Eurographics Association, 2014) Yano, Yuki; Kiyokawa, Kiyoshi; Sherstyuk, Andrei; Mashita, T.; Takemura, H.
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
Head-mounted displays (HMDs) are widely used for visual immersion in virtual reality (VR) systems. The narrow field of view (FOV) of most HMD models is widely acknowledged as the leading cause of insufficient immersion, resulting in suboptimal user performance in various VR tasks as well as early fatigue.
Proposed solutions to this problem range from hardware-based approaches to software enhancements of the viewing process. There are three major view expansion techniques: minification, i.e., rendering graphics with a larger FOV than the display's FOV; motion amplification, i.e., amplifying user head rotation to provide accelerated access to peripheral vision during wide sweeping head movements; and camera divergence, i.e., rotating the left and right virtual cameras outwards to increase the combined binocular FOV. Static view expansion has been reported to increase user efficiency in search and navigation tasks; however, the effectiveness of dynamic view expansion is not yet well understood. When applied, view expansion techniques modify the natural viewing process and alter familiar user reflex-response loops, which may result in motion sickness and poor user performance. Thus, it is vital to evaluate dynamic view expansion techniques in terms of task effectiveness and user workload. This paper details the dynamic view expansion techniques, the experimental settings, and the findings of a user study in which we investigate three view expansion techniques, applying them dynamically based on user behavior. We evaluate the effectiveness of these methods quantitatively, by measuring and comparing user performance and workload in a target search task, and we also collect and compare qualitative feedback from the subjects.
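The three view expansion techniques described in this abstract can be illustrated with a minimal sketch. All function names, gain values, and FOV figures below are hypothetical assumptions for illustration, not parameters taken from the paper.

```python
# Sketch of the three view expansion techniques (illustrative only).
# DISPLAY_FOV_DEG and all gains are assumed values, not the paper's.

DISPLAY_FOV_DEG = 60.0  # physical FOV of a hypothetical HMD


def minified_fov(display_fov_deg, minification=1.2):
    """Minification: render with a FOV wider than the display's,
    squeezing more of the scene into the visible area."""
    return display_fov_deg * minification


def amplified_yaw(head_yaw_deg, gain=1.3):
    """Motion amplification: scale head rotation so the virtual
    camera turns farther than the user's physical head."""
    return head_yaw_deg * gain


def diverged_cameras(center_yaw_deg, divergence_deg=5.0):
    """Camera divergence: rotate the left/right virtual cameras
    outward to widen the combined binocular FOV."""
    return (center_yaw_deg - divergence_deg,
            center_yaw_deg + divergence_deg)
```

A dynamic variant, as studied in the paper, would adjust the minification factor or amplification gain at runtime based on user behavior rather than keeping them fixed.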
Experimental results show that certain levels of minification and motion amplification increase performance by 8.2% and 6.0%, respectively, with comparable or even decreased subjective workload.

Item: Short Paper: A Video Self-avatar Influences the Perception of Heights in an Augmented Reality Oculus Rift
(The Eurographics Association, 2014) Gutekunst, Matthias; Geuss, Michael; Rauhoeft, Greg; Stefanucci, Jeanine; Kloos, Uwe; Mohler, Betty
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
This paper compares the influence that a video self-avatar and the lack of a visual body representation have on height estimation when standing at a virtual visual cliff. A height estimation experiment was conducted using a custom augmented reality Oculus Rift hardware and software prototype, also described in this paper. The results are consistent with previous research demonstrating that the presence of a visual body influences height estimates, just as it has been shown to influence distance and affordance estimates.

Item: Space-Time Maps for Virtual Environments
(The Eurographics Association, 2014) Sherstyuk, Andrei; Treskunov, Anton
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
Terrain image maps are widely used in 3D virtual environments, including games, online social worlds, and virtual reality systems, for controlling the elevation of ground-bound travelers and other moving objects. By making use of all available color channels in the terrain image, it is possible to encode important travel-related information, such as the presence of obstacles, directly into the image. This information can be retrieved in real time, for collision detection and avoidance, at the flat cost of accessing pixel values from image memory. We take this idea of overloading terrain maps even further and introduce time maps, where pixels can also define the rate of time for each player at a given location.
In this concept work, we present a general mechanism for encoding the rate of time into a terrain image and discuss a number of applications that may benefit from making the time rate location-specific. We also offer some insights into how such space-time maps can be integrated into existing game engines.

Item: Successive Wide Viewing Angle Appearance Manipulation with Dual Projector Camera Systems
(The Eurographics Association, 2014) Amano, Toshiyuki; Shimana, Isao; Ushida, Shun; Kono, Kunioki
Editors: Takuya Nojima, Dirk Reiners, Oliver Staadt
In this study, we investigated the use of successive omnidirectional appearance manipulation for the cooperative control of multiple projector camera systems. This type of system comprises several surrounding projector camera units, where each unit independently projects illumination onto a different aspect of a target object based on feedback from the projector cameras. Thus, the system can facilitate appearance manipulation from any viewpoint in the surrounding area. An advantage of this system is that it requires neither information sharing nor a geometrical model. However, this approach is problematic because the stability of the total control system cannot be guaranteed even if the feedback system of each projector camera unit is stable. Therefore, we simulated the feedback of the cooperative projector camera system to evaluate its stability. Based on hardware experiments, we confirmed the stability of omnidirectional appearance manipulation using two projector camera units under an interference condition. The results showed that the object's appearance could be manipulated over approximately 296 degrees of the total circumference of the target object.
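The time-map idea from "Space-Time Maps for Virtual Environments" above, i.e., storing a per-location time rate in a spare channel of the terrain image and reading it back at flat per-pixel cost, can be sketched as follows. The channel layout, the [0, 2] rate range, and the 8-bit scaling are assumptions for illustration, not the paper's actual format.

```python
# Sketch of a time map: the alpha channel of a terrain image stores a
# per-location time rate. Layout and scaling are assumed, not the paper's.

WIDTH, HEIGHT = 4, 4


def encode_time_rate(rate):
    """Map a time rate in [0.0, 2.0] to an 8-bit channel value."""
    rate = max(0.0, min(2.0, rate))
    return round(rate / 2.0 * 255)


def decode_time_rate(alpha):
    """Recover the time rate from an 8-bit channel value."""
    return alpha / 255 * 2.0


# Terrain as a flat list of (r, g, b, a) pixels; gray elevation data,
# with a uniform time rate of 1.0 (normal speed) in the alpha channel.
terrain = [(128, 128, 128, encode_time_rate(1.0))] * (WIDTH * HEIGHT)


def time_rate_at(terrain, x, y):
    """Flat-cost lookup: one pixel read per query, like an obstacle map."""
    return decode_time_rate(terrain[y * WIDTH + x][3])
```

A game engine could sample this channel once per frame per player and scale that player's animation and physics timestep accordingly, making regions of slowed or accelerated time purely data-driven.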