EGVE: Eurographics Workshop on Virtual Environments
Browsing EGVE: Eurographics Workshop on Virtual Environments by Subject "and virtual realities"
Item 3D User Interfaces Using Tracked Multi-touch Mobile Devices (The Eurographics Association, 2012) Wilkes, Curtis B.; Tilden, Dan; Bowman, Doug A.; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
Multi-touch mobile devices are becoming ubiquitous due to the proliferation of smartphone platforms such as the iPhone and Android. Recent research has explored the use of multi-touch input for 3D user interfaces on displays including large touch screens, tablets, and mobile devices. This research explores the benefits of adding six-degree-of-freedom tracking to a multi-touch mobile device for 3D interaction. We analyze and propose benefits of using tracked multi-touch mobile devices (TMMDs), with the goal of developing effective interaction techniques for a variety of tasks within immersive 3D user interfaces. We developed several techniques using TMMDs for virtual object manipulation and compared our techniques to existing best-practice techniques in a series of user studies. We did not, however, find performance advantages for TMMD-based techniques. We discuss our observations and propose alternate interaction techniques and tasks that may benefit from TMMDs.

Item An Augmented Reality and Virtual Reality Pillar for Exhibitions: A Subjective Exploration (The Eurographics Association, 2017) See, Zi Siang; Sunar, Mohd Shahrizal; Billinghurst, Mark; Dey, Arindam; Santano, Delas; Esmaeili, Human; Thwaites, Harold; Robert W. Lindeman and Gerd Bruder and Daisuke Iwai
This paper presents the development of an Augmented Reality (AR) and Virtual Reality (VR) pillar, a novel approach for showing AR and VR content in a public setting. A pillar in a public exhibition venue was converted into a four-sided AR and VR showcase, and a cultural heritage exhibit, ''Boatbuilders of Pangkor'', was shown. Multimedia tablets and mobile AR head-mounted displays (HMDs) were provided for visitors to experience multisensory AR and VR content demonstrated on the pillar.
The content included AR-based videos, maps, images, and text, and VR experiences that allowed visitors to view reconstructed 3D subjects and remote locations in a 360° virtual environment. In this paper, we describe the prototype system, a user evaluation study, and directions for future work.

Item Comparing Auditory and Haptic Feedback for a Virtual Drilling Task (The Eurographics Association, 2012) Rausch, Dominik; Aspöck, Lukas; Knott, Thomas; Pelzer, Sönke; Vorländer, Michael; Kuhlen, Torsten; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
While visual feedback is dominant in Virtual Environments, the use of other modalities such as haptics and acoustics can enhance believability, immersion, and interaction performance. Haptic feedback is especially helpful for many interaction tasks, such as working with medical or precision tools. However, unlike visual and auditory feedback, haptic reproduction is often difficult to achieve due to hardware limitations. This article describes a user study examining how auditory feedback can be used to substitute for haptic feedback when interacting with a vibrating tool. Participants remove target material with a round-headed drill while avoiding damage to the underlying surface. In the experiment, varying combinations of surface force feedback, vibration feedback, and auditory feedback are used. We describe the design of the user study and present the results, which show that auditory feedback can compensate for the lack of haptic feedback.

Item Development of Mutual Telexistence System using Virtual Projection of Operator's Egocentric Body Images (The Eurographics Association, 2015) Saraiji, MHD Yamen; Fernando, Charith Lasantha; Minamizawa, Kouta; Tachi, Susumu; Masataka Imura and Pablo Figueroa and Betty Mohler
In this paper, a mobile telexistence system that provides mutual embodiment of the user's body in a remote place is discussed.
A fully mobile slave robot was designed and developed to deliver visual and motion mapping with the user's head and body. The user can access the robot remotely using a Head-Mounted Display (HMD) and a set of head trackers. This system addresses three main points: the user's body representation in a remote physical environment, preserving the user's sense of body ownership during teleoperation, and presenting the user's body interactions and visuals on the remote side. These three points were addressed by virtually projecting the user's body into the egocentric local view and projecting body visuals remotely. This system is intended for teleconferencing and remote social activities where no physical manipulation is required.

Item Dynamic View Expansion for Improving Visual Search in Video See-through AR (The Eurographics Association, 2016) Yano, Yuki; Orlosky, Jason; Kiyokawa, Kiyoshi; Takemura, Haruo; Dirk Reiners and Daisuke Iwai and Frank Steinicke
The extension or expansion of human vision is often accomplished with video see-through head-mounted displays (HMDs) because of their clarity and ability to modulate background information. However, little is known about how we should control these augmentations, and continuous augmentation can have negative consequences such as distorted motion perception. To address these problems, we propose a dynamic view expansion system that modulates the vergence, translation, or scale of video see-through cameras to give users on-demand peripheral vision enhancement. Unlike other methods that modify a user's direct field of view, we take advantage of ultra-wide fisheye lenses to provide access to peripheral information that would not otherwise be available. In a series of experiments testing our prototype in real-world search, identification, and matching tasks, we test these expansion methods and evaluate both user performance and subjective measures such as fatigue and simulation sickness.
Results show that less head movement is required with dynamic view expansion, but performance varies with the application.

Item Global Landmarks Do Not Necessarily Improve Spatial Performance in Addition to Bodily Self-Movement Cues when Learning a Large-Scale Virtual Environment (The Eurographics Association, 2015) Meilinger, Tobias; Schulte-Pelkum, Jörg; Frankenstein, Julia; Berger, Daniel; Bülthoff, Heinrich H.; Masataka Imura and Pablo Figueroa and Betty Mohler
Comparing spatial performance in different virtual reality setups can indicate which cues are relevant for a realistic virtual experience. Bodily self-movement cues and global orientation information have been shown to increase spatial performance compared with local visual cues alone. We tested the combined impact of bodily and global orientation cues by having participants learn a virtual multi-corridor environment either by only walking through it, with additional distant landmarks providing heading information, or with a surrounding hall relative to which participants could determine their orientation and location. Subsequent measures of spatial memory revealed only small and unreliable differences between the learning conditions. We conclude that additional global landmark information does not necessarily improve users' orientation within a virtual environment when bodily self-movement cues are available.

Item Indoor Tracking for Large Area Industrial Mixed Reality (The Eurographics Association, 2012) Scheer, Fabian; Müller, Stefan; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
For mixed reality (MR) applications, tracking a video camera in a rapidly changing large environment of several hundred square meters still represents a challenging task. In contrast to an installation in a laboratory, industrial scenarios such as a running factory require minimal setup, calibration, and training times for a tracking system, and only minimal changes to the environment.
This paper presents a tracking system to compute the pose of a video camera mounted on a mobile carriage-like device in very large indoor environments consisting of several hundred square meters. The carriage is equipped with a touch-sensitive monitor to display a live augmentation. The tracking system is based on an infrared laser device that detects at least three of a few retroreflective targets in the environment and compares actual target measurements with a precalibrated 2D target map. The device provides a 2D position and orientation. To obtain a six-degree-of-freedom (DOF) pose, a coordinate system adjustment method is presented that determines the transformation between the 2D laser tracker and the image sensor of a camera. To analyse the different error sources contributing to the overall error, the accuracy of the system is evaluated in a controlled laboratory setup. Beyond that, an evaluation of the system in a large factory building is shown, as well as the application of the system to industrial MR discrepancy checks of complete factory buildings. Finally, the utility of the 2D scanning capabilities of the laser in conjunction with a virtually generated 2D map of the 3D model of a factory is demonstrated for MR discrepancy checks.

Item Influence of Path Complexity on Spatial Overlap Perception in Virtual Environments (The Eurographics Association, 2015) Vasylevska, Khrystyna; Kaufmann, Hannes; Masataka Imura and Pablo Figueroa and Betty Mohler
Real walking in large virtual indoor environments within a limited real-world workspace requires effective spatial compression methods. These methods should go unnoticed by the user. Scene manipulation that creates overlapping spaces has been suggested in recent work. However, there is little research focusing on users' perception of overlapping spaces depending on the layout of the environment.
In this paper we investigate how the complexity of a path influences the perception of the overlapping spaces it connects. We compare three spatial virtual layouts with paths that differ in complexity (length and number of turns). Our results suggest that increasing a path's length alone is less effective at reducing overlap detection than combining length with additional turns. Furthermore, combining paths that differ in complexity influences distance perception within overlapping spaces.

Item Interpretation of Tactile Sensation using an Anthropomorphic Finger Motion Interface to Operate a Virtual Avatar (The Eurographics Association, 2014) Ujitoko, Yusuke; Hirota, Koichi; Takuya Nojima and Dirk Reiners and Oliver Staadt
The objective of the system presented in this paper is to give users tactile feedback while walking in a virtual world through an anthropomorphic finger motion interface. We determined that the synchrony between the first-person perspective and proprioceptive information, together with the motor activity of the user's fingers, can induce an illusory feeling equivalent to a sense of ownership of the invisible avatar's legs. Under this condition, the ground under the virtual avatar's foot is perceived through the user's fingertip. The experiments indicated that with our method the scale of the tactile perception of texture roughness was extended, and that the enlargement ratio was proportional to the avatar's body (foot) size. To display the target tactile perception to users, we only have to control the virtual avatar's body (foot) size and the roughness of the tactile texture.
Our results suggest that, in terms of tactile perception, fingers can be a replacement for legs in locomotion interfaces.

Item Investigation of Dynamic View Expansion for Head-Mounted Displays with Head Tracking in Virtual Environments (The Eurographics Association, 2014) Yano, Yuki; Kiyokawa, Kiyoshi; Sherstyuk, Andrei; Mashita, T.; Takemura, H.; Takuya Nojima and Dirk Reiners and Oliver Staadt
Head-mounted displays (HMDs) are widely used for visual immersion in virtual reality (VR) systems. It is acknowledged that the narrow field of view (FOV) of most HMD models is the leading cause of insufficient immersion, resulting in suboptimal user performance in various VR tasks as well as early fatigue. Proposed solutions to this problem range from hardware-based approaches to software enhancements of the viewing process. There are three major view expansion techniques: minification, rendering graphics with a larger FOV than the display's FOV; motion amplification, amplifying user head rotation to provide accelerated access to peripheral vision during wide sweeping head movements; and diverging the left and right virtual cameras outwards to increase the combined binocular FOV. Static view expansion has been reported to increase user efficiency in search and navigation tasks; however, the effectiveness of dynamic view expansion is not yet well understood. When applied, view expansion techniques modify the natural viewing process and alter familiar user reflex-response loops, which may result in motion sickness and poor user performance. Thus, it is vital to evaluate dynamic view expansion techniques in terms of task effectiveness and user workload. This paper details the dynamic view expansion techniques, the experimental settings, and the findings of the user study. In the user study, we investigate three view expansion techniques, applying them dynamically based on user behavior.
We evaluate the effectiveness of these methods quantitatively by measuring and comparing user performance and user workload in a target search task. We also collect and compare qualitative feedback from the subjects in the experiment. Experimental results show that certain levels of minification and motion amplification increase performance by 8.2% and 6.0%, respectively, with comparable or even decreased subjective workload.

Item Modifying an Identified Size of Objects Handled with Two Fingers Using Pseudo-Haptic Effects (The Eurographics Association, 2012) Ban, Yuki; Narumi, Takuji; Tanikawa, Tomohiro; Hirose, Michitaka; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
In our research, we aim to construct a visuo-haptic system that employs pseudo-haptic effects to provide users with the sensation of touching virtual objects of varying shapes. Thus far, we have shown that it is possible to modify an identified curved surface shape or edge angle by displacing the visual representation of the user's hand. However, this method is limited in that it cannot accommodate touching with two or more fingers simply by visually displacing the user's hand. To solve this problem, we need to not only displace the visual representation of the user's hand but also deform it. Hence, in this paper, we focus on modifying the identified size of objects handled with two fingers. This was achieved by deforming the visual representation of the user's hand in order to construct a novel visuo-haptic system. We devised a video see-through system that enables us to change the perceived shape of an object that a user is visually touching. The visual representation of the user's hand is deformed as if the user were handling the visual object, when in actuality the user is handling an object of another size.
Using this system, we performed an experiment to investigate the effects of visuo-haptic interaction and evaluated its effectiveness. The results showed that the perceived size of objects handled with a thumb and other finger(s) could be modified if the difference between the sizes of the physical and visual stimuli was in the range from -40% to 35%. This indicates that our method can be applied to the visuo-haptic shape display system that we proposed.

Item MR Work Supporting System Using Pepper's Ghost (The Eurographics Association, 2016) Tsuruzoe, Hiroto; Odera, Satoru; Shigeno, Hiroshi; Okada, Ken-ichi; Dirk Reiners and Daisuke Iwai and Frank Steinicke
Recently, MR (Mixed Reality) techniques have been used in many fields, one of which is work support. Work support using MR techniques can display work instructions directly in the workspace and help users work effectively, especially in assembly tasks. MR work support for assembly tasks often uses an HMD (Head-Mounted Display) to construct the MR environment. However, the use of an HMD entails problems such as the burden on the head, a narrow view, and motion sickness. One technique to address these problems is Pepper's ghost, an optical illusion using a glass pane. In this paper, we propose a naked-eye MR work support system for assembly tasks using Pepper's ghost. This system enables a beginner to assemble blocks into an object with the naked eye and little burden.

Item On the Analysis of Acoustic Distance Perception in a Head Mounted Display (The Eurographics Association, 2017) Dollack, Felix; Imbery, Christina; Bitzer, Jörg; Robert W. Lindeman and Gerd Bruder and Daisuke Iwai
Recent work has shown that distance perception in virtual reality differs from reality. Several studies have tried to quantify the discrepancy between virtual and real visual distance perception, but little work has been done on how visual stimuli affect acoustic distance perception in virtual environments.
The present study investigates how a visual stimulus affects acoustic distance perception in virtual environments. Virtual sound sources based on binaural room impulse response (BRIR) measurements made at distances ranging from 0.9 to 4.9 m in a lecture room were used as auditory stimuli. Visual stimulation was provided using a head-mounted display (HMD). Participants were asked to estimate the egocentric distance to the sound source in two conditions: auditory with GUI (A) and auditory with HMD (A+V). Each condition was presented in its own block to a total of eight participants. We found that a systematic offset is introduced by the visual stimulus.

Item Passive Arm Swing Motion for Virtual Walking Sensation (The Eurographics Association, 2016) Saka, Naoyuki; Ikei, Yasushi; Amemiya, Tomohiro; Hirota, Koichi; Kitazaki, Michiteru; Dirk Reiners and Daisuke Iwai and Frank Steinicke
This paper describes the characteristics of an arm swing display as part of a multisensory display for creating a sensation of walking in a user sitting on a vestibular display (a motion chair). The passive arm swing produced by the display was evaluated with respect to the sensation of walking. A passive swing angle about 20% smaller than that of real walking motion (from 25 to 35 degrees) effectively enhanced the sensation of walking when displayed as a single-modality stimulus for a walking period of 1.4 s. The flexion/extension ratio was shifted forward relative to real walking. The optimal swing obtained by the method of adjustment showed the same characteristics. The sensation of walking was markedly increased when the passive arm swing and the vestibular stimulus were presented synchronously.
Active arm swing produced a weaker sensation of walking than passive arm swing, which might be ascribed to the inherently passive nature of arm swing during real walking.

Item Personalized Animatable Avatars from Depth Data (The Eurographics Association, 2013) Mashalkar, Jai; Bagwe, Niket; Chaudhuri, Parag; Betty Mohler and Bruno Raffin and Hideo Saito and Oliver Staadt
We present a method to create virtual character models of real users from noisy depth data. We use a combination of four depth sensors to capture a point cloud model of the person. Direct meshing of this data often creates meshes with topology that is unsuitable for proper character animation. We develop our mesh model by fitting a single template mesh to the point cloud in a two-stage process. The first stage involves piecewise smooth deformation of the mesh, whereas the second stage performs a finer fit using an iterative Laplacian framework. We complete the model by adding properly aligned and blended textures to the final mesh and show that it can be easily animated using motion data from a single depth camera. Our process maintains the topology of the original mesh, and the proportions of the final mesh match those of the actual user, validating the accuracy of the process. Other than the depth sensors, the process does not require any specialized hardware for creating the mesh. It is efficient, robust, and mostly automatic.

Item Physical Space Requirements for Redirected Walking: How Size and Shape Affect Performance (The Eurographics Association, 2015) Azmandian, Mahdi; Grechkin, Timofey; Bolas, Mark; Suma, Evan; Masataka Imura and Pablo Figueroa and Betty Mohler
Redirected walking provides a compelling solution for exploring large virtual environments in a natural way. However, the research literature provides few guidelines regarding the trade-offs involved in selecting the size and layout of the physical tracked space.
We designed a rigorously controlled benchmarking framework and conducted two simulated user experiments to systematically investigate how the total area and dimensions of the tracked space affect the performance of steer-to-center and steer-to-orbit algorithms. The results indicate that the minimum viable size of the physical tracked space for these redirected walking algorithms is approximately 6 m × 6 m, with performance continuously improving in larger tracked spaces. At the same time, no ''optimal'' tracked space size can guarantee the absence of contacts with the boundary. We also found that square tracked spaces enabled the best overall performance, with the steer-to-center algorithm also performing well in moderately elongated rectangular spaces. Finally, we demonstrate that introducing translation gains can provide a useful boost in performance, particularly when physical space is constrained. We conclude with a discussion of potential applications of our benchmarking toolkit to other problems related to the performance of redirected walking platforms.

Item Positioning of Subtitles in Cinematic Virtual Reality (The Eurographics Association, 2018) Rothe, Sylvia; Tran, Kim; Hussmann, Heinrich; Bruder, Gerd and Yoshimoto, Shunsuke and Cobb, Sue
Cinematic Virtual Reality has been increasing in popularity in recent years. Watching 360-degree movies with a head-mounted display, the viewer can freely choose the direction of view and thus the visible section of the movie. Therefore, a new approach to the placement of subtitles is needed. In a preliminary study we compared several static methods, in which the position of the subtitles is not influenced by the movie content. The preferred method was used in the main study to compare it with dynamic, world-referenced subtitling, where the subtitles are placed in the movie world. The position of the subtitles depends on the scene and is close to the speaking person.
Although the participants did not prefer one of these methods in general, in some cases in our experiments world-referenced subtitles led to a higher presence score, less sickness, and lower workload.

Item R-V Dynamics Illusion: Psychophysical Phenomenon Caused by the Difference between Dynamics of Real Object and Virtual Object (The Eurographics Association, 2015) Kataoka, Yuta; Hashiguchi, Satoshi; Shibata, Fumihisa; Kimura, Asako; Masataka Imura and Pablo Figueroa and Betty Mohler
In Mixed-Reality (MR) space, it appears that the sense of weight can be affected by an MR visual stimulus with a movable portion. We named this psychophysical influence, caused by the difference between the dynamics of the real object (R) and the virtual object (V) movement, the ''R-V Dynamics Illusion.'' There are many combinations of experiments that can be conducted. Previously, we conducted experiments for the case where the real object is rigid and the virtual object is dynamically changeable. In this paper, we conducted experiments for the case where the real object is liquid and both the real and virtual objects are dynamically changeable. The results of the experiments showed that subjects sensed weight differently when a virtual object with a movable portion was superimposed onto a real liquid object.

Item Real-Time 3D Peripheral View Analysis (The Eurographics Association, 2016) Moniri, Mohammad Mehdi; Luxenburger, Andreas; Schuffert, Winfried; Sonntag, Daniel; Dirk Reiners and Daisuke Iwai and Frank Steinicke
Human peripheral vision suffers from several limitations that differ among the various regions of the visual field. Since these limitations result in natural visual impairments, many intelligent user interfaces based on eye tracking could benefit from peripheral view calculations that compensate for events occurring outside the very center of gaze.
We present a general peripheral view calculation model that extends previous work on attention-based user interfaces using eye gaze. An intuitive, two-dimensional visibility measure based on the concept of the solid angle is developed to determine to what extent an object of interest observed by a user intersects with each region of the underlying visual field model. The results are weighted by the visual acuity in each visual field region to determine the total visibility of the object. We exemplify the proposed model in a virtual reality car simulation application incorporating a head-mounted display with integrated eye tracking functionality. In this context, we provide a quantitative evaluation in terms of a runtime analysis of the different steps of our approach. We also provide several example applications, including an interactive web application that visualizes the concepts and calculations presented in this paper.

Item Redirected Steering for Virtual Self-Motion Control with a Motorized Electric Wheelchair (The Eurographics Association, 2012) Fiore, Loren Puchalla; Phillips, Lane; Bruder, Gerd; Interrante, Victoria; Steinicke, Frank; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
Redirection techniques have shown great potential for enabling users to travel in large-scale virtual environments while their physical movements are limited to a much smaller laboratory space. Traditional redirection approaches introduce a subliminal discrepancy between the real and virtual motions of the user through subtle manipulations, which are thus highly dependent on the user and on the virtual scene. In the worst case, such approaches may result in failure cases that have to be resolved by obvious interventions, e.g., when a user faces a physical obstacle and tries to move forward.
In this paper we introduce a remote steering method for redirection techniques used for physical transportation in an immersive virtual environment. We present a redirection controller for turning a legacy wheelchair device into a remote-control vehicle. In a psychophysical experiment we analyze the automatic angular motion redirection of our proposed controller with respect to the detectability of discrepancies between real and virtual motions. Finally, we discuss this redirection method and its novel affordances for virtual traveling.
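Several of the items above (the physical space requirements study and the redirected steering paper) rely on redirection gains that subtly remap a user's real motion into virtual motion. As a rough, hypothetical sketch only — the function names, constants, and the toy steer-to-center policy below are illustrative assumptions, not taken from any of these papers — the basic idea can be written as:

```python
import math

# Illustrative sketch of redirected walking gains (hypothetical values):
# rotation gains scale real head turns, translation gains scale walked
# distance, and a steering policy chooses the gain based on user position.

def apply_rotation_gain(real_yaw_delta_deg, gain):
    """Map a real head-rotation increment (degrees) to a virtual one."""
    return real_yaw_delta_deg * gain

def apply_translation_gain(real_step_m, gain):
    """Map a real walked distance (metres) to a virtual one."""
    return real_step_m * gain

def steer_to_center_gain(user_pos, center=(0.0, 0.0), max_gain=1.3):
    """Toy steer-to-center policy: the farther the user is from the
    tracked-space center, the stronger the redirection gain, capped at
    max_gain (1.3 here is an arbitrary illustrative cap)."""
    dist = math.hypot(user_pos[0] - center[0], user_pos[1] - center[1])
    return min(1.0 + 0.1 * dist, max_gain)
```

In a real system, gains are calibrated against perceptual detection thresholds so that the remapping stays below the user's noticing point, which is exactly what the psychophysical experiments in these papers measure.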