JVRC 13: Joint Virtual Reality Conference of EGVE - EuroVR
Browsing JVRC 13: Joint Virtual Reality Conference of EGVE - EuroVR by Subject "augmented"
Now showing 1 - 3 of 3
Item
Exploring Distant Objects with Augmented Reality (The Eurographics Association, 2013)
Tatzgern, Markus; Grasset, Raphael; Veas, Eduardo; Kalkofen, Denis; Seichter, Hartmut; Schmalstieg, Dieter
Editors: Betty Mohler, Bruno Raffin, Hideo Saito, and Oliver Staadt
Augmented reality (AR) enables users to retrieve additional information about real-world objects and locations. Exploring such location-based information in AR requires physical movement to different viewpoints, which may be tiring and is even infeasible when viewpoints are out of reach. In this paper, we present object-centric exploration techniques for handheld AR that allow users to access information freely, using a virtual copy metaphor to explore large real-world objects. We evaluated our interfaces under controlled conditions and collected first experiences in a real-world pilot study. Based on our findings, we put forward design recommendations for future generations of location-based AR browsers, 3D tourist guides, and situated urban planning applications.

Item
Personalized Animatable Avatars from Depth Data (The Eurographics Association, 2013)
Mashalkar, Jai; Bagwe, Niket; Chaudhuri, Parag
Editors: Betty Mohler, Bruno Raffin, Hideo Saito, and Oliver Staadt
We present a method to create virtual character models of real users from noisy depth data. We use a combination of four depth sensors to capture a point-cloud model of the person. Direct meshing of this data often produces meshes whose topology is unsuitable for proper character animation. We therefore develop our mesh model by fitting a single template mesh to the point cloud in a two-stage process: the first stage performs a piecewise smooth deformation of the mesh, while the second stage refines the fit using an iterative Laplacian framework. We complete the model by adding properly aligned and blended textures to the final mesh and show that it can be easily animated using motion data from a single depth camera. Our process maintains the topology of the original mesh, and the proportions of the final mesh match those of the actual user, validating the accuracy of the process. Other than the depth sensors, the process requires no specialized hardware for creating the mesh. It is efficient, robust, and mostly automatic.

Item
Semantic Modelling of Interactive 3D Content (The Eurographics Association, 2013)
Flotynski, Jakub; Walczak, Krzysztof
Editors: Betty Mohler, Bruno Raffin, Hideo Saito, and Oliver Staadt
Interactive three-dimensional content is the primary element of virtual reality (VR) and augmented reality (AR) systems. The increasing complexity of VR/AR systems and their use in various application domains require efficient methods of creating, searching, and combining interactive 3D content that can be used by people with different specialities who are not necessarily IT experts. The Semantic Web approach enables the description of web resources with common semantic concepts, and the use of such concepts may also facilitate the creation of 3D content. The main contribution of this paper is a method for semantic modelling of interactive 3D content. The method leverages semantic constraints between different components of 3D content as well as representations of 3D content at different levels of abstraction. It can be used with a multitude of domain-specific ontologies and knowledge bases to simplify the creation and search of reusable semantic 3D content components and the assembly of complex 3D scenes from independent distributed elements.
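The avatar-modelling abstract above mentions refining a template mesh with an iterative Laplacian framework. As an illustrative sketch only (not the authors' actual method), one iteration of such a scheme can be written as a uniform Laplacian smoothing step combined with soft positional constraints pulling selected vertices toward target points from the scanned point cloud; the function name, parameters, and weights below are assumptions for illustration.

```python
import numpy as np

def laplacian_fit_step(verts, neighbors, targets, target_idx,
                       alpha=0.5, beta=0.5):
    """One illustrative fitting iteration (hypothetical, simplified).

    verts:      (n, 3) array of mesh vertex positions
    neighbors:  list of neighbor-index lists, one per vertex
    targets:    (k, 3) array of target positions from the point cloud
    target_idx: indices of the k constrained vertices
    alpha:      smoothing weight (pull toward neighbor centroid)
    beta:       constraint weight (pull toward target position)
    """
    new = verts.copy()
    # Uniform Laplacian smoothing: move each vertex toward the
    # centroid of its one-ring neighbors (computed from old positions).
    for i, nbrs in enumerate(neighbors):
        centroid = verts[nbrs].mean(axis=0)
        new[i] = (1 - alpha) * verts[i] + alpha * centroid
    # Soft positional constraints: blend constrained vertices
    # toward their targets.
    for i, t in zip(target_idx, targets):
        new[i] = (1 - beta) * new[i] + beta * t
    return new

# Tiny usage example: a unit square with one vertex constrained
# to lift toward a target above the plane.
verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
fitted = laplacian_fit_step(verts, neighbors,
                            targets=np.array([[0., 0, 1]]),
                            target_idx=[0])
```

In practice such a step would be iterated until the residual to the point cloud stops decreasing; because the template's connectivity is never changed, the output mesh keeps the animation-friendly topology of the template.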