EGVE08: 14th Eurographics Symposium on Virtual Environments
Item: BAT - a Distributed Meta-tracking System (The Eurographics Association, 2008)
Kahlesz, Ferenc; Klein, Reinhard (Editors: Robert van Liere and Betty Mohler)
This paper describes the design of the 'BAT' (Bonn Articulated Tracker) visual tracking framework. The system allows the easy implementation of real-time, multi-camera motion tracking that can be distributed (including in a multithreaded sense) across several computing nodes or CPU cores. The system does not itself realize any specific tracker; instead, it manages a meta-algorithm flow between processing blocks. An actual tracking implementation is realized by specifying the processing blocks through plugins. Depending on the plugins supplied, 'BAT' can instantiate a wide variety of systems, ranging from object-detection methods to model-based deformable object tracking based on time coherence, and it also allows for hybrid algorithms. Being a meta dataflow system, 'BAT' naturally facilitates sensor fusion. Moreover, it can be used as a testbed to compare and evaluate different kinds of tracking algorithms or algorithm substeps.

Item: Camera Calibration of a Nintendo Wii Remote using PA-10 Robotic Arms (The Eurographics Association, 2008)
Grimes, Holly S.; Ferguson, Robin Stuart; McMenemy, Karen
The remote control unit for the Nintendo Wii games console features an infrared camera capable of detecting up to 4 infrared lights. As such, this device can be calibrated using standard camera calibration techniques so that it can be used for tracking or human interfacing in a virtual environment. Camera calibration is an active area of research, and a wealth of software is available to perform it, e.g. ARToolKit and Matlab. Camera calibration typically requires at least five images containing multiple grid points, from which a matrix of camera parameters can be estimated.
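As background to the grid-based estimation just described, the step from known point correspondences to a camera matrix can be sketched with a textbook direct linear transform (DLT). This is a generic, minimal illustration with synthetic data and assumed names; it is not the Wii-specific pipeline of the paper.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 exact 3D-2D correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (right singular vector with the smallest
    # singular value) holds the 12 entries of P, up to scale.
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 4)

# Ground-truth camera: focal length 800 px, principal point (320, 240),
# identity rotation, translated 5 units along the optical axis.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
P_true = K @ Rt

rng = np.random.default_rng(0)
world = rng.uniform(-1, 1, size=(24, 3))       # 24 synthetic calibration points
homog = np.hstack([world, np.ones((24, 1))])
proj = homog @ P_true.T
image = proj[:, :2] / proj[:, 2:]              # projected pixel coordinates

P_est = dlt_projection_matrix(world, image)
reproj = homog @ P_est.T
reproj = reproj[:, :2] / reproj[:, 2:]
print(np.allclose(reproj, image, atol=1e-4))   # reprojection matches the input
```

With exact correspondences the recovered matrix reprojects the points to within numerical precision; real measurements would of course carry noise and need more care (normalization, nonlinear refinement).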
This paper proposes a method of calibrating the Wii Remote's IR camera by building up 24 calibration points from the 4 viewed infrared LEDs for a single viewpoint of the IR camera. This is done by moving the 4 LEDs in a known sequence using highly accurate PA-10 robotic arms. The camera calibration matrix parameters obtained with this method are presented.

Item: Display Devices for Virtual Environments: Impact on Performance, Workload, and Simulator Sickness (The Eurographics Association, 2008)
Conradi, Jessica; Alexander, Thomas
Usability, and thus the success, of Virtual Environment (VE) systems is closely related to the type of display used. Applicable VE displays range from simple desktop monitors with low immersion to high-end, immersive HMDs. It is often inferred that more sophisticated displays always produce higher performance; this paper critically questions that assumption. To estimate the effectiveness and usability of a display, measures of human performance, subjective workload, and simulator sickness serve as critical criteria. The effect of three different displays (desktop monitor, projection wall, HMD) with varying degrees of immersion on each of the criteria was analyzed empirically.

Item: Effect of the Size of the Field of View on the Perceived Amplitude of Rotations of the Visual Scene (The Eurographics Association, 2008)
Ogier, Maelle; Bülthoff, Heinrich H.; Bresciani, Jean-Pierre
Efficient navigation requires a good representation of body position/orientation in the environment and an accurate updating of this representation when the body-environment relationship changes.
We tested whether visual flow alone - i.e., with no landmarks - can be used to update this representation when the visual scene is rotated, and whether a limited horizontal field of view (30 or 60 degrees), as is the case in most virtual reality applications, degrades performance compared to a full field of view. Our results show that (i) visual flow alone does not allow the amplitude of rotations of the visual scene to be estimated accurately, notably giving rise to a systematic underestimation of rotations larger than 30 degrees, and (ii) having more than 30 degrees of horizontal field of view does not substantially improve performance. Taken together, these results suggest that a 30-degree field of view is enough to (under)estimate the amplitude of visual rotations when only visual flow information is available, and that landmarks should probably be provided if the amplitude of the rotations has to be perceived accurately.

Item: A Framework for Performance Evaluation of Model-Based Optical Trackers (The Eurographics Association, 2008)
Smit, Ferdi A.; Liere, Robert van
We describe a software framework to evaluate the performance of model-based optical trackers in virtual environments. The framework can be used to evaluate and compare the performance of different trackers under various conditions, to study the effects of varying intrinsic and extrinsic camera properties, and to study the effects of environmental conditions on tracker performance. The framework consists of a simulator that, given various input conditions, generates a series of images. The input conditions model important aspects such as the interaction task, input device geometry, camera properties, and occlusion. As a concrete case, we illustrate the usage of the proposed framework for input device tracking in a near-field desktop virtual environment.
We compare the performance of an in-house tracker with the ARToolKit tracker under a fixed set of conditions. We also show how the framework can be used to find the optimal camera parameters given a pre-recorded interaction task. Finally, we use the framework to determine the minimum required camera resolution for desktop, Workbench, and CAVE environments. The framework is shown to provide an efficient and simple method to study various conditions affecting optical tracker performance. Furthermore, it can be used as a valuable development tool to aid in the construction of optical trackers.

Item: The Influence of Head Tracking and Stereo on User Performance with Non-Isomorphic 3D Rotation (The Eurographics Association, 2008)
LaViola, Joseph J., Jr.; Forsberg, Andrew S.; Huffman, John; Bragdon, Andrew
We present an experimental study that explores how head tracking and stereo viewing affect user performance when rotating 3D virtual objects using isomorphic and non-isomorphic rotation techniques. Our experiment compares isomorphic with non-isomorphic rotation utilizing four different display modes (no head tracking/no stereo, head tracking/no stereo, no head tracking/stereo, and head tracking/stereo) and two different angular error thresholds for task completion. Our results indicate that rotation error is significantly lower when subjects perform the task using non-isomorphic 3D rotation with head tracking/stereo than with no head tracking/no stereo. In addition, subjects performed the rotation task with significantly less error with head tracking/stereo and no head tracking/stereo than with no head tracking/no stereo, regardless of rotation technique.
The majority of the subjects tested also felt that stereo and non-isomorphic amplification were important in the 3D rotation task.

Item: Integrating Particle Dispersion Models into Real-time Virtual Environments (The Eurographics Association, 2008)
Willemsen, Peter; Norgren, Andrew; Singh, Balwinder; Pardyjak, Eric R.
We present a system for interacting with fast-response particle dispersion models in a real-time virtual environment. The particle simulation system is used both as an engineering tool for modeling and understanding turbulence in flow fields and as an integral part of an atmospheric wind display for real-time virtual environments. The focus of this paper is on the use of virtual environment visualization and interaction as a means of providing deeper insight into the underlying turbulence and flow field models in the dispersion simulations. The use of immersive virtual environment displays, as opposed to standard computer display screens, allows engineers to acquire novel viewpoints and interact with the data in novel ways. We present our current visualization and interaction methods. Our goal is to develop a tool for interacting with, controlling, and manipulating the model parameters of fast-response particle dispersion systems while immersed within the simulation environment.

Item: Interactive and Accurate Collision Detection in Virtual Orthodontics (The Eurographics Association, 2008)
Rodrigues, Maria Andréia F.; Rocha, Rafael S.; Silva, Wendel B.
Our interactive computer-based training tool for use in orthodontics is aimed at students and experienced professionals who need to predict orthodontic treatment outcomes, including determining whether and where teeth come into contact. Fundamental to achieving the best possible fit is the relative position of the teeth within the dental arch and with respect to the opposing arches.
In this paper we present the implementation and analysis of different types of discrete and continuous collision detection algorithms that meet performance goals suitable for moving-teeth simulation. The results obtained show that the collision detection algorithms implemented using sphere trees provide quite acceptable accuracy while maintaining interactive visualization.

Item: miniCAVE - A Fully Immersive Display System Using Consumer Hardware (The Eurographics Association, 2008)
Schlechtweg, Stefan
We present the design and construction of a small-scale CAVE that employs active stereo using shutter glasses for viewing as well as for projection. It is controlled by a single server with four graphics boards. The main use of this device is educational: students are given the possibility to see their programs run in an immersive setting. Further applications include VR design studies and the development of new VR interaction techniques using the Nintendo Wii controller as an input device.

Item: Real-time Color Ball Tracking for Augmented Reality (The Eurographics Association, 2008)
Sýkora, Daniel; Sedlácek, David; Riege, Kai
In this paper, we introduce a lightweight and robust tracking technique based on color balls. The algorithm builds on a variant of the randomized Hough transform and is optimized for use in real-time applications such as low-cost Augmented Reality (AR) systems. With just one conventional color camera, our approach can determine the 3D position of several color balls at interactive frame rates on a common PC workstation. It is fast enough to be easily combined with another real-time tracking engine. In contrast to popular tracking techniques based on the recognition of planar fiducial markers, it is robust to partial occlusion, which eases handling and manipulation.
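The randomized Hough transform mentioned above can be sketched generically in 2D: repeatedly sample three edge points, solve for the unique circle through them, and vote for the quantized centre and radius. This is a minimal illustration on synthetic data with assumed names, not the paper's real-time 3D ball tracker.

```python
import numpy as np
from collections import Counter

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:          # (nearly) collinear sample: no circle
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def randomized_hough_circle(points, iterations=500, seed=1):
    """Vote over randomly sampled point triples; return the modal circle."""
    rng = np.random.default_rng(seed)
    votes = Counter()
    for _ in range(iterations):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        c = circle_through(points[i], points[j], points[k])
        if c is not None:
            votes[tuple(round(v) for v in c)] += 1  # 1-pixel quantization
    return votes.most_common(1)[0][0]

# Synthetic edge points on a circle of centre (40, 25), radius 10.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.stack([40 + 10 * np.cos(t), 25 + 10 * np.sin(t)], axis=1)
print(randomized_hough_circle(pts))  # → (40, 25, 10)
```

Because only a few random triples are needed to build a dominant vote, this family of methods avoids the full 3D accumulator of the classical circular Hough transform, which is what makes it attractive for real-time use.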
Furthermore, using balls as markers provides a proper haptic feedback and visual metaphor. The exemplary use of our technique in the context of two AR applications indicates the effectiveness of the proposed approach.

Item: Selective Stylization for Visually Uniform Tangible AR (The Eurographics Association, 2008)
Fischer, Jan; Flohr, Daniel; Straßer, Wolfgang
In tangible user interfaces, physical props are used for direct interaction with a computer system. Tangible interaction applications often use augmented reality display techniques to overlay virtual graphical objects on the interaction area. This typically leads to the problem that the virtual augmentations have a distinct, simple computer-generated look, which makes them easily distinguishable from the real environment and physical props. Here, we present a new style of tangible interaction which seamlessly combines real objects and graphical models. In the tangible user interaction zone of our system, physical objects and virtual models are displayed in the same technical-illustration style. Regions outside the interaction zone, as well as the user's hands, are shown unaltered in order to maintain unmodified visual feedback in these areas. Our example application is a tangible urban planning environment, in which the placement of both real and virtual building models affects the flow of wind and the casting of shadows. We describe the real-time rendering pipeline that generates the selectively stylized output images in the tangible interaction system and discuss the functionality of the urban planning application.

Item: Spotlight Interest Management for Distributed Virtual Environments (The Eurographics Association, 2008)
Dunwell, Ian; Whelan, John C.
This paper presents a novel refinement to visual attention-based interest management in distributed virtual environments (VEs).
It is suggested that in the context of a desktop VE, where only limited immersion occurs, using proximity in virtual space as the primary measure of relevance may be less effective than considering the characteristics of visual interaction with the two-dimensional display. The method utilises a spotlight model of human attention in place of a proximity measure, and is capable of giving priority to clients that are extremely distant in virtual space but near the centre of the display. To evaluate the technique, a series of user experiments is described which studies participants' ability to detect change between techniques in a proprietary collaborative virtual environment. Two groups of users are shown to exhibit a blind preference for the spotlight method, and failed to detect a significant change when available bandwidth was reduced using this approach. The technique may be integrated alongside existing saliency-based interest management paradigms as an alternative to the distance-based factor.

Item: Supervision of 3D Multimodal Rendering for Protein-protein Virtual Docking (The Eurographics Association, 2008)
Bouyer, Guillaume; Bourdot, Patrick
Protein-protein docking is a recent practice in biological research in which 3D models of proteins are used to predict the structure of the complexes these proteins form. Currently, the most common methods used for docking are fully computational approaches combined with molecular visualization tools. However, these approaches are time-consuming and yield a large number of potential solutions. Our basic hypothesis is that Virtual Reality (VR) interactions can combine the benefits of multimodal rendering and biologists' expertise in docking with automated algorithms, in order to reach docking solutions more efficiently. To this end, we have designed an immersive and multimodal application for molecular docking.
Visual, audio, and haptic feedback are combined to communicate biological information, help manipulate proteins, and explore possible docking solutions. The multimodal distribution is supervised by a rule-based software module, depending on the interaction context.

Item: Tangible Interaction for 3D Widget Manipulation in Virtual Environments (The Eurographics Association, 2008)
Kruszynski, Krzysztof J.; Liere, Robert van
In this paper we explore the use of tangible controllers for the manipulation of 3D widgets in scientific visualization applications. Tangible controllers can be more efficient than unrestricted 6-DOF devices, since many 3D widgets impose restrictions on how they can be manipulated. In particular, for tasks that are in essence two-dimensional, such as drawing a contour on a surface, tangible controllers have advantages over 6-DOF devices. We conducted a user study in which subjects drew a contour on a three-dimensional curved surface using a 3D contour drawing widget. We compared four different input methods for controlling the contour drawing widget and the viewpoint of the surface: a 2D mouse for drawing and viewpoint selection; a 6-DOF pen for drawing and a 6-DOF cube device for viewpoint selection; a 6-DOF pen for drawing on a tangible 6-DOF cube that implements a Magic Lens style visualization technique; and a 2D mouse for drawing with a 6-DOF cube for viewpoint selection. We show that while the mouse outperforms 6-DOF input methods, the tangible controller is superior to unrestricted 6-DOF input.

Item: Using Teleporting, Awareness and Multiple Views to Improve Teamwork in Collaborative Virtual Environments (The Eurographics Association, 2008)
Dodds, Trevor J.; Ruddle, Roy A.
Mobile Group Dynamics (MGDs) are a suite of techniques that help people work together in large-scale collaborative virtual environments (CVEs).
The present paper describes the implementation and evaluation of three additional MGD techniques (teleporting, awareness, and multiple views) which, when combined, produced a four-fold increase in the amount that participants communicated in a CVE and also significantly increased the extent to which participants communicated over extended distances in the CVE. The MGDs were evaluated in an urban planning scenario with groups of either seven (teleporting + awareness) or eight (teleporting + awareness + multiple views) participants. The study has implications for CVE designers, because it provides quantitative and qualitative data about how teleporting, awareness, and multiple views improve groupwork in CVEs.

Item: A View Management Method for Mobile Mixed Reality Systems (The Eurographics Association, 2008)
Shibata, Fumihisa; Nakamoto, Hiroyuki; Sasaki, Ryoichi; Kimura, Asako; Tamura, Hideyuki
This paper describes a view management method for annotations on mobile mixed reality systems. In recent years, mobile devices such as mobile phones and PDAs have rapidly gained popularity; thus, MR systems using mobile devices are expected to become a rich field. However, the mobile devices' components (screen and application-executable memory) are very small compared to those of notebook computers, and the processor does not have enough computational power. It is therefore difficult to apply existing view management techniques on such devices. We propose a view management method that consists of seven sub-functions. Each sub-function is implemented by a simple algorithm, while a system developer assembles the required sub-functions into a customized view management component. We implemented some applications based on the proposed method and discuss the effectiveness of the view management algorithm.