JVRC11: Joint Virtual Reality Conference of EGVE - EuroVR
16 items
Edited by Sabine Coquillart, Anthony Steed, and Greg Welch.

Adapting Standard Video Codecs for Depth Streaming (The Eurographics Association, 2011)
Pece, Fabrizio; Kautz, Jan; Weyrich, Tim
Cameras that can acquire a continuous stream of depth images, for instance the Microsoft Kinect, are now commonly available. It may seem that one should be able to stream these depth videos using standard video codecs, such as VP8 or H.264. However, the quality degrades considerably because the compression algorithms are geared towards standard three-channel (8-bit) colour video, whereas depth videos are single-channel but have a higher bit depth. We present a novel encoding scheme that efficiently converts single-channel depth images to standard 8-bit three-channel images, which can then be streamed using standard codecs. Our encoding scheme ensures that the compression affects the depth values as little as possible. We show results obtained using two common video encoders (VP8 and H.264), as well as results obtained with JPEG compression. The results indicate that our encoding scheme performs much better than simpler methods.

Bimanual Haptic Simulator for Medical Training: System Architecture and Performance Measurements (The Eurographics Association, 2011)
Ullrich, Sebastian; Rausch, Dominik; Kuhlen, Torsten
In this paper we present a simulator for two-handed haptic interaction. As an application example, we chose a medical scenario that requires simultaneous interaction with a hand and a needle on a simulated patient. The system combines bimanual haptic interaction with a physics-based soft-tissue simulation. To our knowledge, the combination of finite element methods for the simulation of deformable objects with haptic rendering is seldom addressed, especially with two haptic devices in a non-trivial scenario.
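The core idea of "Adapting Standard Video Codecs for Depth Streaming" above, packing single-channel higher-bit-depth values into three 8-bit channels, can be illustrated with a naive bit-split sketch. The function names are ours, and this split is for illustration only; the paper's actual encoding is deliberately designed to be far more robust to lossy compression:

```python
def depth_to_rgb(d):
    """Pack one 16-bit depth value into an (r, g, b) triple of 8-bit values.

    Naive bit-split, for illustration only: under lossy compression a
    one-unit error in the high channel shifts depth by 256 units, which
    is exactly the failure mode the paper's encoding is designed to avoid.
    """
    assert 0 <= d <= 0xFFFF
    return (d >> 8, d & 0xFF, 0)  # high byte, low byte, unused channel

def rgb_to_depth(rgb):
    """Recover the 16-bit depth value (exact only without compression)."""
    r, g, _ = rgb
    return (r << 8) | g
```

Without compression the round trip is lossless; the interesting engineering is entirely in making the mapping survive a lossy codec.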
Challenges are to find a balance between real-time constraints and the high computational demands of simulation fidelity, and to synchronize data between system components. The system has been successfully implemented and tested on two different hardware platforms: one mobile on a laptop and another stationary on a semi-immersive VR system. These two platforms were chosen to demonstrate scalability in terms of fidelity and cost. To compare performance and estimate latency, we measured timings of update loops and logged event-based timings of several components in the software.

Exploring Frictional Surface Properties for Haptic-Based Online Shopping (The Eurographics Association, 2011)
Bamarouf, Yasser A.; Smith, Shamus P.
The sense of touch is important in our everyday lives, and its absence makes it difficult to explore and manipulate everyday objects. Existing online shopping practice lacks the opportunity for physical evaluation, which people often use and value when making product buying decisions. The work described here investigates differential thresholds for simulated frictional surfaces, an important haptic feature for product comparison. One aim is to gain insight into the design space for multiple comparisons of virtual surfaces, as will be needed to support online shopping. A user study was conducted to explore differential thresholds in stick-slip frictional force. The study demonstrates that, on average, a dynamic friction threshold of 14.1% is needed to differentiate between two frictional surfaces. Moreover, it shows, for a Phantom Omni, that the maximum number of unique comparable dynamic coefficient of friction combinations is twenty-eight, at any given level of static coefficient of friction.
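The 14.1% differential threshold reported in "Exploring Frictional Surface Properties for Haptic-Based Online Shopping" behaves like a Weber fraction, so the number of mutually distinguishable friction levels in a device's range, and hence the number of unique pairwise comparisons, can be estimated as follows. The friction range in the usage note is a hypothetical example, not a value from the paper:

```python
import math

def distinguishable_levels(mu_min, mu_max, weber=0.141):
    """Count friction levels in [mu_min, mu_max] such that each level
    exceeds the previous one by at least the differential threshold."""
    return 1 + math.floor(math.log(mu_max / mu_min) / math.log(1.0 + weber))

def unique_pairs(n):
    """Unique pairwise comparisons among n levels: n choose 2."""
    return n * (n - 1) // 2
```

Eight distinguishable levels yield `unique_pairs(8) == 28`, matching the twenty-eight combinations reported in the abstract; for example, a hypothetical range of 0.2 to 0.51 admits eight such levels at the 14.1% threshold.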
The results are a step towards defining surface differential thresholds for online shopping and other haptic-based applications that require multiple surface comparisons.

The Impact of Viewing Stereoscopic Displays on the Visual System (The Eurographics Association, 2011)
Howarth, Peter A.; Underwood, P. J.
The study examined the effects of varying the accommodation-convergence conflict created by stereoscopic displays, which are now commonly used for viewing virtual environments, television and cinema. These displays dissociate the naturally co-varying accommodation (focusing) and convergence (eye position) demands by placing an image geometrically behind or in front of the screen, and it has been suggested that the unnatural conflict between these demands causes discomfort. Commercially available stereoscopic equipment was used to create a stimulus with four different levels of conflict, one of which was a control condition of zero conflict. Sixteen participants, each with a normal visual system, were presented with all four conditions in a balanced experimental design. Changes in visual discomfort, near heterophoria, distance heterophoria and visual acuity were assessed. Clear changes in comfort were observed, although no significant associated physiological changes were found. The model that best describes the relationship between conflict and discomfort is one in which a small amount of conflict does not cause visual discomfort, whereas a larger amount does. This finding is consistent with expectations based on historical optometric experiments, which indicate that the normal visual system can maintain comfortable vision whilst experiencing small discrepancies between the accommodation and convergence demands. Our results indicate that visual discomfort occurs beyond a given conflict threshold and continues to rise as the conflict increases.
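A common way to quantify the accommodation-convergence conflict that the study above manipulates is the difference, in diopters, between the accommodative demand of the physical screen and the vergence demand of the disparity-defined virtual object. A minimal sketch (the function name is ours):

```python
def va_conflict_diopters(screen_dist_m, virtual_dist_m):
    """Accommodation-convergence conflict in diopters.

    Accommodation is driven by the physical screen distance, while
    convergence is driven by the disparity-defined virtual distance;
    the conflict is the difference of their reciprocals (1/m = diopter).
    """
    return abs(1.0 / screen_dist_m - 1.0 / virtual_dist_m)
```

A screen at 2 m showing an object that appears at 1 m yields a 0.5 D conflict; the zero-conflict control condition corresponds to `virtual_dist_m == screen_dist_m`.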
They are consistent with the idea that this threshold is idiosyncratic to the individual. The principal implication of these findings is that people with normal visual systems should not experience asthenopic symptoms as a consequence of the accommodation-convergence conflict if the difference between the stimuli to the two systems is small.

Integrating Semantic Directional Relationships into Virtual Environments: A Meta-modelling Approach (The Eurographics Association, 2011)
Trinh, Thanh-Hai; Chevaillier, Pierre; Barange, M.; Soler, J.; Loor, P. De; Querrec, R.
This study is concerned with semantic modelling of virtual environments (VEs). A semantic model of a VE provides an abstract, high-level representation of the main aspects of the environment: ontological structures, behaviours and interactions of entities, and so on. Furthermore, such a semantic model can be explored by artificial agents to exhibit human-like behaviours or to assist users in the VE. Previous research focused on formalising a knowledge layer that is a conceptual representation of scene content or an application's entities. However, a semantic representation of spatial knowledge is still lacking. This paper proposes to integrate a semantic model of directional knowledge into VEs. Such a directional model makes it possible to specify relationships such as left/right, above, or north/south that are critical in many VE applications (e.g., VEs for training, navigation aid systems). We focus particularly on modelling, computing, and visualising directional relationships. First, we propose a theoretical model of direction in VEs that enables the specification of direction from both a first- and a third-person perspective. Second, we propose a generic architecture for modelling direction in VEs using a meta-modelling approach.
Directional relationships are described in a qualitative manner at a conceptual level, and are thus abstracted from the metrical details of VEs. Finally, we show how our semantic model of direction can be used in a cultural heritage application to specify the behaviours of artificial agents and to visualise directional constraints.

An Interdisciplinary VR-architecture for 3D Chatting with Non-verbal Communication (The Eurographics Association, 2011)
Gobron, Stephane; Ahn, Junghyun; Silvestre, Quentin; Thalmann, Daniel; Rank, Stefan; Skowron, Marcin; Paltoglou, Georgios; Thelwall, Michael
Communication between avatar and agent has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system and the emotional model. Non-verbal communication, such as facial expression, gaze, or head orientation, is crucial for simulating realistic behavior, but is still a neglected aspect in the simulation of virtual societies. In response, this paper presents the modularity necessary to allow virtual humans (VHs) to converse with consistent facial expressions, whether between two users through their avatars, between an avatar and an agent, or between an avatar and a Wizard of Oz. We believe such an approach is particularly suitable for the design and implementation of applications involving VH interaction in virtual worlds. To this end, three key features are needed to design and implement this system, entitled 3D-emoChatting. First, a global architecture that combines components from several research fields. Second, real-time analysis and management of emotions that allows interactive dialogues with non-verbal communication. Third, a model of a virtual emotional mind, called emoMind, that makes it possible to simulate individual emotional characteristics.
To conclude the paper, we briefly describe a user test whose full analysis is beyond the scope of the present paper.

Modeling the Effect of Force Feedback for 3D Steering Tasks (The Eurographics Association, 2011)
Liu, Lei; Liere, Robert van; Kruszynski, Krzysztof J.
Path steering is an interaction task concerning how quickly one may navigate through a path. The steering law, proposed by Accot and Zhai [AZ97], is a predictive model that describes the time to accomplish a 2D steering task as a function of the path length and width. In this paper, we study a 3D steering task in the presence of force feedback. Our goal is to extend the application of the steering law to such a task and to find, if possible, additional predictors of users' temporal performance. In particular, we quantitatively examine how the amount of force feedback influences the movement time. We carried out a repeated-measures-design experiment with varying path length, width and force magnitude. The results indicate that the movement time can be successfully modeled by path length, width and force magnitude. The relationship shows that task efficiency can be improved once an appropriate force magnitude is applied. Additionally, we compared the capacity of our model to the steering law. According to the Akaike Information Criterion (AIC), our model provides a better description of the movement time when the force magnitude can vary. The new model can be utilized as a guideline for designing experiments with a haptic device.

Novative Rendering and Physics Engines to Apprehend Special Relativity (The Eurographics Association, 2011)
Doat, Tony; Parizot, Etienne; Vézien, Jean-Marc
Relativity, as introduced by Einstein, is regarded as one of the most important revolutions in the history of physics.
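The steering law that "Modeling the Effect of Force Feedback for 3D Steering Tasks" above builds on can be sketched as follows. The constants are arbitrary placeholders, and the paper's own extended model, which adds force magnitude as a predictor, is not reproduced here:

```python
def steering_time(path_length, path_width, a=0.1, b=0.2):
    """Accot-Zhai steering law for a straight tunnel:
    T = a + b * (A / W), where A is the path length and W its width.
    a and b are empirically fitted constants; the defaults here are
    placeholders, not values from the paper.
    """
    return a + b * (path_length / path_width)
```

Halving the path width doubles the index of difficulty A/W, and hence the variable part of the predicted movement time.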
Nevertheless, the direct outcomes of this theory cannot be observed on mundane objects, because they only become apparent when relative velocities close to the speed of light are involved. These effects are so counterintuitive, and so contradictory to our daily understanding of space and time, that physics students find it hard to learn Special Relativity beyond the mathematical equations and to understand the deep implications of the theory. Although we cannot actually travel at near the speed of light, Virtual Reality makes it possible to experience the effects of relativity in a 3D immersive environment. Our project is a framework designed to merge advanced 3D graphics with Virtual Reality interfaces in order to create an appropriate environment for studying and learning relativity, as well as for developing some intuition of relativistic effects and the four-dimensional reality of space-time. In this paper, we focus on designing and implementing an easy-to-use, game-like application: a carom billiard. Our implementation includes relativistic effects in an innovative graphical rendering engine and a non-Newtonian physics engine to treat the collisions. The innovation of our approach lies in the ability (i) to render several relativistic objects in real time, each moving with a different velocity vector (contrary to what was achieved in previous works), (ii) to allow for interactions between objects, and (iii) to enable the user to interact with the objects and modify the scene. To achieve this, we implement the 4D nature of space-time directly at the heart of the rendering engine, and develop an algorithm for accessing the non-simultaneous past events that are visible to observers at their specific locations and at a given instant of their proper time. We explain how to retrieve the collision events between the pucks and the cushions of the billiard game, and we show several counterintuitive results for very fast pucks.
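The "non-simultaneous past events" lookup at the heart of the rendering engine can be illustrated in one dimension: for an object in uniform motion, the retarded time, i.e. the moment at which the light now reaching the observer actually left the object, has a closed form. This is a simplified sketch under our own assumptions, not the paper's full 4D algorithm:

```python
C = 299_792_458.0  # speed of light in m/s

def retarded_time(x0, v, t, observer_x=0.0):
    """Solve |x(t_r) - observer_x| = C * (t - t_r) for an object moving
    as x(t) = x0 + v * t, assuming it stays on the observer's positive side.
    Rearranging gives t_r = (C*t - (x0 - observer_x)) / (C + v)."""
    return (C * t - (x0 - observer_x)) / (C + v)

def apparent_position(x0, v, t, observer_x=0.0):
    """Where the object appears to be: its position at the retarded time."""
    return x0 + v * retarded_time(x0, v, t, observer_x)
```

For a static object one light-second away, the retarded time lags the observer's time by exactly one second; a fast-moving object appears displaced from its instantaneous position, which is what produces the counterintuitive renderings described above.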
The effectiveness of the approach is demonstrated with snapshots of videos in which several independent objects travel at velocities close to the speed of light, c.

Panoramic Video Techniques for Improving Presence in Virtual Environments (The Eurographics Association, 2011)
Dalvandi, Arefe; Riecke, Bernhard E.; Calvert, Tom
Photo-realistic techniques that use sequences of images captured from a real environment can be used to create virtual environments (VEs). Unlike 3D modelling techniques, the required human work and computation are independent of the amount of detail and complexity in the scene, and in addition they provide great visual realism. In this study we created virtual environments using three different photo-realistic techniques: panoramic video, regular video, and a slide show of panoramic still images. While panoramic video offered continuous movement and the ability to interactively change the view, it was the most expensive and time-consuming of the three techniques to produce. To assess whether the extra effort needed to create panoramic video is warranted, we analysed how effectively each of these techniques supported a sense of presence in participants. We analysed participants' subjective sense of presence in the context of a navigation task, in which they travelled along a route in a VE and tried to learn the relative locations of the landmarks on the route. Participants' sense of presence was highest in the panoramic video condition. This suggests that the effort of creating panoramic video may be warranted whenever high presence is desired.

Realistic Lighting Simulation for Interactive VR Applications (The Eurographics Association, 2011)
Löffler, Alexander; Marsalek, Lukas; Hoffmann, Hilko; Slusallek, Philipp
In the field of aircraft design, interior illumination is increasingly becoming an important design element.
Different illumination scenarios inside an aircraft cabin are considered to influence the mood of air passengers, to help passengers be better prepared for jet lag, and to create an overall positive environment. Consequently, physically correct and realistic lighting simulation becomes essential during the design process. Available tools produce videos or still images of illumination settings. The main reason for this is that realistic lighting simulation is believed to require heavy offline processing and to be unfeasible from within a real-time system. On the other hand, interactive Virtual Reality (VR) applications are an appropriate tool for experiencing an aircraft cabin under different illuminations. The ability to integrate lighting simulations into VR applications would simplify the design process remarkably by skipping time-consuming context and tool switches. In this paper, we present a solution that integrates realistic lighting simulation with interactive performance in a single VR application. We explain our integration of real-time ray tracing, interactive global illumination, and measured point lights in a VR system, and its combination with classic rasterization techniques. We describe suitable interaction metaphors that enable realistic lighting simulation, high interactivity and intuitive interaction in an application for light design inside an aircraft cabin.

Short Paper: Acquisition and Management of Building Materials for VR Applications (The Eurographics Association, 2011)
Westner, Phil; Bues, Matthias
This paper describes a workflow and system to digitize, manage and render large and rapidly changing sets of building materials, e.g. brick, concrete, and ceramic tiles, for VR architectural visualization. We describe the use case of VR-based sampling of detached houses, i.e. the process of choosing the interior and exterior materials of the house to be built.
This use case implies some specific requirements: the main material database is very large, in the range of several thousand individual materials, and is subject to frequent change due to regular updates of the material manufacturers' product collections. In addition, a relatively large number of materials have to be rendered simultaneously in the VR visualization. These requirements impose limitations on both the material acquisition process, which has to be handled by end users, and the rendering methods usable for this application, which have to work within the available graphics memory and rendering performance budget. The solution we propose in this paper combines a simple, easy-to-handle image acquisition process with automatic generation of image resources and associated parameters, which together form the resources for shader-based real-time rendering. Despite not being physically correct, this rendering provides high fidelity in the representation of the relevant materials. In addition to describing our concept, we also discuss the results of its implementation and productive use by a manufacturer of detached houses.

Short Paper: Comparing Virtual Trajectories Made in Slalom Using Walking-In-Place and Joystick Techniques (The Eurographics Association, 2011)
Terziman, Léo; Marchal, Maud; Multon, Franck; Arnaldi, Bruno; Lécuyer, Anatole
In this paper we analyze and compare the trajectories made in a Virtual Environment with two different navigation techniques: the first is a standard joystick technique, and the second is the Walking-In-Place (WIP) technique. We propose a spatial and temporal analysis of the trajectories produced with both techniques during a virtual slalom task. We found that trajectories and users' behaviors are very different across the two conditions.
Our results notably show that with the WIP technique users turned more often and navigated more sequentially, i.e. they waited to cross obstacles before changing direction. However, users were also able to modulate their speed more precisely with WIP. These results could be used to optimize the design and future implementations of WIP techniques. Our analysis could also become the basis of a future framework for comparing other navigation techniques.

Short Paper: Exploring the Object Relevance of a Gaze Animation Model (The Eurographics Association, 2011)
Oyekoya, Oyewole; Steed, Anthony; Pan, Xueni
Models for animating the eyes of virtual characters often focus on making the face appear natural and believable. There has been relatively little work in computer graphics that investigates the relevance of the objects of interest (gaze targets). In this paper, a gaze animation model is constructed that allocates visual attention to relevant targets among the objects within the virtual character's field of view in an immersive 3D virtual environment. Relevance is determined by the proximity, eccentricity, change in orientation, and velocity of objects in the virtual character's environment. Two tasks were designed to test the relevance of the objects selected by the gaze animation model during the tasks. Eye-tracking data obtained from six human subjects provided benchmark data for measuring the efficiency of the model in picking relevant objects. The gaze animation model largely outperformed a random selection algorithm in predicting the real targets of users' interest within the virtual environment.

Short Paper: View Dependent Rendering to Simple Parametric Display Surfaces (The Eurographics Association, 2011)
Harish, Pawan; Narayanan, P. J.
Computer displays have remained flat and rectangular for the most part.
In this paper, we explore parametric display surfaces: displays of arbitrary shape, but with a mapping to a 2D domain for each pixel. The display can have an arbitrary curved shape given by implicit or parametric equations. We present a fast and efficient method to render 3D scenes onto such a display in a perspectively correct manner. Our method tessellates the scene based on geodesic edge length and a user-defined error threshold. We also modify scene vertices, based on per-vertex ray casting, so that the final image appears correct from the user's viewpoint. The ray-surface intersection procedure, geodesic length computation and 2D image mapping are assumed to be known for the given surface. We exploit the tessellation hardware of SM 5.0 GPUs to perform the error checking, polygon splitting, and rendering in a single pass. This brings the performance of our approach close to that of rasterization schemes, without needing ray tracing. Our scheme does not interpolate pixels, ensuring high quality. We demonstrate real display prototypes and show the scalability of our system using simulated scenarios.

User Centred Methods for Gathering VR Design Tool Requirements (The Eurographics Association, 2011)
Thalen, Jos P.; Voort, M. C. van der
This paper addresses the use of VR to facilitate design tasks in the early stages of a product design process. A preliminary exploratory study, involving over thirty interviews with four industrial partners, revealed only a few occurrences of VR being used in the early stages of design. While the potential benefits of such applications are generally acknowledged, product designers lack appropriate design tools that allow them to quickly and easily create the applications. The research presented in this paper applies user-centred design principles to identify requirements for useful, usable and accessible VR design tools.
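The per-vertex ray casting step in "Short Paper: View Dependent Rendering to Simple Parametric Display Surfaces" above can be illustrated with the simplest curved display, a sphere: a ray from the viewer's eye through a scene vertex is intersected with the display surface. This is our own minimal sketch; the paper handles general implicit and parametric surfaces:

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest non-negative t with |origin + t*direction - center| = radius,
    or None if the ray misses the sphere. direction need not be unit length."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Coefficients of the quadratic a*t^2 + b*t + c = 0
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    s = math.sqrt(disc)
    for t in ((-b - s) / (2.0 * a), (-b + s) / (2.0 * a)):
        if t >= 0.0:
            return t
    return None
```

Applied per scene vertex, with the hit point then mapped through the surface's 2D parameterization, this yields the perspectively correct image the paper describes.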
The primary challenge in gathering such requirements is the lack of experience product designers generally have with VR technologies; product designers cannot provide reliable requirements for tools they have never seen or used. We present a sequence of three concrete steps that provide product designers with sufficient information to express tool requirements, without developing extensive prototypes. The three methods have been developed and applied in an industrial case study, as part of a larger research project. The paper outlines this research context, the three methods, and the lessons learned from the case study.

V3S, a Virtual Environment for Risk Management Training (The Eurographics Association, 2011)
Barot, Camille; Burkhardt, Jean-Marie; Lourdeaux, Domitile; Lenne, Dominique
In high-risk industries, risk management training has become a major issue. It requires not only teaching rules and procedures, but also promoting a real understanding of the risks at stake and training learners to work in degraded situations (stress, difficult co-activity, damaged equipment...). In this paper, we present the outcomes of the V3S project. The project resulted in a virtual environment focused on the visualization of the consequences of errors, whether they are made by the learner or by the virtual autonomous characters that populate the environment. To allow the representation of errors and compromises, we developed a task description language to model learners' and autonomous characters' situated knowledge about their tasks. These models are used to monitor learners' actions and to generate the virtual characters' behaviours. The evaluation showed a high satisfaction level and encouraging usability measures. As future work, we propose to extend the possibilities of the simulation through the creation and monitoring of adaptive scenarios.
Our objective here is twofold: to support roleplay-like learning situations inspired by game-based learning and interactive storytelling, and to dynamically adapt the difficulty to the learner's performance by adjusting the behaviour of virtual characters able to assist or disrupt the user.