JVRC12: Joint Virtual Reality Conference of ICAT - EGVE - EuroVR

Haptic Rendering of Dynamic Image Sequence Using String based Haptic Device SPIDAR

Jayasiri, Anusha
Akahane, Katsuhito
Sato, Makoto

Modifying an Identified Size of Objects Handled with Two Fingers Using Pseudo-Haptic Effects

Ban, Yuki
Narumi, Takuji
Tanikawa, Tomohiro
Hirose, Michitaka

Indoor Tracking for Large Area Industrial Mixed Reality

Scheer, Fabian
Müller, Stefan

Towards Interacting with Force-Sensitive Thin Deformable Virtual Objects

Hummel, Johannes
Wolff, Robin
Dodiya, Janki
Gerndt, Andreas
Kuhlen, Torsten

Fast Motion Rendering for Single-Chip Stereo DLP Projectors

Lancelle, Marcel
Voß, Gerrit
Fellner, Dieter W.

Networked Displays for VR Applications: Display as a Service

Löffler, Alexander
Pica, Luciano
Hoffmann, Hilko
Slusallek, Philipp

3D User Interfaces Using Tracked Multi-touch Mobile Devices

Wilkes, Curtis B.
Tilden, Dan
Bowman, Doug A.

Floor-based Audio-Haptic Virtual Collision Responses

Blom, Kristopher J.
Haringer, Matthias
Beckhaus, Steffi

Comparing Auditory and Haptic Feedback for a Virtual Drilling Task

Rausch, Dominik
Aspöck, Lukas
Knott, Thomas
Pelzer, Sönke
Vorländer, Michael
Kuhlen, Torsten

Redirected Steering for Virtual Self-Motion Control with a Motorized Electric Wheelchair

Fiore, Loren Puchalla
Phillips, Lane
Bruder, Gerd
Interrante, Victoria
Steinicke, Frank

Trade-Offs Related to Travel Techniques and Level of Display Fidelity in Virtual Data-Analysis Environments

Ragan, Eric D.
Wood, Andrew
McMahan, Ryan P.
Bowman, Doug A.

An Empiric Evaluation of Confirmation Methods for Optical See-Through Head-Mounted Display Calibration

Maier, Patrick
Dey, Arindam
Waechter, Christian A. L.
Sandor, Christian
Tönnis, Marcus
Klinker, Gudrun


BibTeX (JVRC12: Joint Virtual Reality Conference of ICAT - EGVE - EuroVR)
@inproceedings{10.2312:EGVE/JVRC12/009-015,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Haptic Rendering of Dynamic Image Sequence Using String based Haptic Device SPIDAR}},
  author    = {Jayasiri, Anusha and Akahane, Katsuhito and Sato, Makoto},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/009-015}
}
@inproceedings{10.2312:EGVE/JVRC12/001-008,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Modifying an Identified Size of Objects Handled with Two Fingers Using Pseudo-Haptic Effects}},
  author    = {Ban, Yuki and Narumi, Takuji and Tanikawa, Tomohiro and Hirose, Michitaka},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/001-008}
}
@inproceedings{10.2312:EGVE/JVRC12/021-028,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Indoor Tracking for Large Area Industrial Mixed Reality}},
  author    = {Scheer, Fabian and Müller, Stefan},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/021-028}
}
@inproceedings{10.2312:EGVE/JVRC12/017-020,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Towards Interacting with Force-Sensitive Thin Deformable Virtual Objects}},
  author    = {Hummel, Johannes and Wolff, Robin and Dodiya, Janki and Gerndt, Andreas and Kuhlen, Torsten},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/017-020}
}
@inproceedings{10.2312:EGVE/JVRC12/029-036,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Fast Motion Rendering for Single-Chip Stereo DLP Projectors}},
  author    = {Lancelle, Marcel and Voß, Gerrit and Fellner, Dieter W.},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/029-036}
}
@inproceedings{10.2312:EGVE/JVRC12/037-044,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Networked Displays for VR Applications: Display as a Service}},
  author    = {Löffler, Alexander and Pica, Luciano and Hoffmann, Hilko and Slusallek, Philipp},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/037-044}
}
@inproceedings{10.2312:EGVE/JVRC12/065-072,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{3D User Interfaces Using Tracked Multi-touch Mobile Devices}},
  author    = {Wilkes, Curtis B. and Tilden, Dan and Bowman, Doug A.},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/065-072}
}
@inproceedings{10.2312:EGVE/JVRC12/057-064,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Floor-based Audio-Haptic Virtual Collision Responses}},
  author    = {Blom, Kristopher J. and Haringer, Matthias and Beckhaus, Steffi},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/057-064}
}
@inproceedings{10.2312:EGVE/JVRC12/049-056,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Comparing Auditory and Haptic Feedback for a Virtual Drilling Task}},
  author    = {Rausch, Dominik and Aspöck, Lukas and Knott, Thomas and Pelzer, Sönke and Vorländer, Michael and Kuhlen, Torsten},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/049-056}
}
@inproceedings{10.2312:EGVE/JVRC12/045-048,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Redirected Steering for Virtual Self-Motion Control with a Motorized Electric Wheelchair}},
  author    = {Fiore, Loren Puchalla and Phillips, Lane and Bruder, Gerd and Interrante, Victoria and Steinicke, Frank},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/045-048}
}
@inproceedings{10.2312:EGVE/JVRC12/081-084,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{Trade-Offs Related to Travel Techniques and Level of Display Fidelity in Virtual Data-Analysis Environments}},
  author    = {Ragan, Eric D. and Wood, Andrew and McMahan, Ryan P. and Bowman, Doug A.},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/081-084}
}
@inproceedings{10.2312:EGVE/JVRC12/073-080,
  booktitle = {Joint Virtual Reality Conference of ICAT - EGVE - EuroVR},
  editor    = {Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts},
  title     = {{An Empiric Evaluation of Confirmation Methods for Optical See-Through Head-Mounted Display Calibration}},
  author    = {Maier, Patrick and Dey, Arindam and Waechter, Christian A. L. and Sandor, Christian and Tönnis, Marcus and Klinker, Gudrun},
  year      = {2012},
  publisher = {The Eurographics Association},
  ISSN      = {1727-530X},
  ISBN      = {978-3-905674-40-8},
  DOI       = {10.2312/EGVE/JVRC12/073-080}
}
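The entries above can be saved to a `.bib` file and cited from LaTeX in the usual way. A minimal sketch, assuming the entries are stored as `jvrc12.bib` (the filename and the document text are illustrative; the citation key is taken verbatim from the first entry above):

```latex
% main.tex -- minimal sketch; assumes the BibTeX entries above
% are stored in jvrc12.bib next to this file
\documentclass{article}
\begin{document}
String-based haptic rendering of video with SPIDAR was presented
at JVRC 2012~\cite{10.2312:EGVE/JVRC12/009-015}.
\bibliographystyle{plain}
\bibliography{jvrc12}
\end{document}
```

Note that the citation keys contain colons and slashes; classic BibTeX accepts these characters, but some downstream tools are stricter, in which case renaming the keys is a safe fix.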

Recent Submissions
  • Item
    Haptic Rendering of Dynamic Image Sequence Using String based Haptic Device SPIDAR
    (The Eurographics Association, 2012) Jayasiri, Anusha; Akahane, Katsuhito; Sato, Makoto; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    This paper concerns how to associate haptic signals with a dynamic image sequence, in other words a video, so that haptic motion can be felt. Haptic technologies have evolved significantly and are now used in a wide range of application areas. With the advent of digital multimedia and immersive displays, the significance of exploring new ways of interacting with video media has grown. However, the incorporation of haptic interface technology into dynamic image sequences is still in its infancy. Rather than just seeing and hearing a video, viewers' experience can be further enhanced by letting them feel the movement of objects in the video through a haptic interface, as an additional sensation to seeing and hearing. The objective of this research is to use haptic interface technology to interact with a dynamic image sequence and enable viewers to feel the motion force of objects in the video beyond passive watching and listening. In this paper, we discuss how to render haptic motion, which is computed from frame-to-frame velocity estimated with optical flow. For haptic motion rendering, we evaluated two methods, one using a gain controller and one using a non-linear function, to identify the better method. To interact with the video we used the string-based haptic device SPIDAR, which provides high-definition force feedback to users.
  • Item
    Modifying an Identified Size of Objects Handled with Two Fingers Using Pseudo-Haptic Effects
    (The Eurographics Association, 2012) Ban, Yuki; Narumi, Takuji; Tanikawa, Tomohiro; Hirose, Michitaka; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    In our research, we aim to construct a visuo-haptic system that employs pseudo-haptic effects to provide users with the sensation of touching virtual objects of varying shapes. Thus far, we have shown that it is possible to modify the identified shape of a curved surface or the angle of an edge by displacing the visual representation of the user's hand. However, this method cannot accommodate touching with two or more fingers through visual displacement of the hand alone. To solve this problem, we need to not only displace the visual representation of the user's hand but also deform it. Hence, in this paper, we focus on modifying the identified size of objects handled with two fingers. This was achieved by deforming the visual representation of the user's hand in order to construct a novel visuo-haptic system. We devised a video see-through system that enables us to change the perceived shape of an object the user is visually touching. The visual representation of the user's hand is deformed as if the user were handling the visual object, when in actuality the user is handling an object of another size. Using this system we performed an experiment to investigate the effects of visuo-haptic interaction and evaluated its effectiveness. The results showed that the perceived size of objects handled with the thumb and other finger(s) could be modified if the difference between the physical and visual stimuli was in the range from -40% to 35%. This indicates that our method can be applied to the visuo-haptic shape display system that we proposed.
  • Item
    Indoor Tracking for Large Area Industrial Mixed Reality
    (The Eurographics Association, 2012) Scheer, Fabian; Müller, Stefan; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    For mixed reality (MR) applications, tracking a video camera in a rapidly changing large environment of several hundred square meters still represents a challenging task. In contrast to an installation in a laboratory, industrial scenarios such as a running factory require minimal setup, calibration, and training times of a tracking system, and merely minimal changes to the environment. This paper presents a tracking system to compute the pose of a video camera mounted on a mobile carriage-like device in very large indoor environments of several hundred square meters. The carriage is equipped with a touch-sensitive monitor to display a live augmentation. The tracking system is based on an infrared laser device that detects at least three out of a few retroreflective targets in the environment and compares actual target measurements with a precalibrated 2D target map. The device delivers a 2D position and orientation. To obtain a six degree of freedom (DOF) pose, a coordinate system adjustment method is presented that determines the transformation between the 2D laser tracker and the image sensor of a camera. To analyse the different error sources contributing to the overall error, the accuracy of the system is evaluated in a controlled laboratory setup. Beyond that, an evaluation of the system in a large factory building is shown, as well as the application of the system for industrial MR discrepancy checks of complete factory buildings. Finally, the utility of the 2D scanning capabilities of the laser in conjunction with a virtually generated 2D map of the 3D model of a factory is demonstrated for MR discrepancy checks.
  • Item
    Towards Interacting with Force-Sensitive Thin Deformable Virtual Objects
    (The Eurographics Association, 2012) Hummel, Johannes; Wolff, Robin; Dodiya, Janki; Gerndt, Andreas; Kuhlen, Torsten; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    The selection of the right input devices for 3D interaction methods is important for a successful VR system. While natural direct interaction is often preferred, research has shown that indirect interaction can be beneficial. This paper focuses on an immersive simulation and training environment, in which one sub-task is to carefully grasp and move a force-sensitive thin deformable foil without damaging it. In order to ensure transfer of training, it was necessary to convey to the user how gently the foil was grasped and moved. We explore the potential of three simple and light-weight interaction methods that each map interaction to a virtual hand in a distinct way. We used a standard tracked joystick with an indirect mapping, a standard finger tracking device with a direct mapping based on finger position, and a novel enhanced finger tracking device that additionally allowed pinch force input. The results of our summative user study show no significant difference in task performance among the three interaction methods. The simple position-based mapping using finger tracking was most preferred, although the enhanced finger tracking device with direct force input offered the most natural interaction mapping. Our findings show that both direct and indirect input methods have potential for interacting with force-sensitive thin deformable objects, with the direct method being preferred.
  • Item
    Fast Motion Rendering for Single-Chip Stereo DLP Projectors
    (The Eurographics Association, 2012) Lancelle, Marcel; Voß, Gerrit; Fellner, Dieter W.; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    Single-chip color DLP projectors show the red, green and blue components one after another. When the gaze moves relative to the displayed pixels, color fringes are perceived. In order to reduce these artefacts, many devices show the same input image twice at double the rate, i.e. a 60 Hz source image is displayed at 120 Hz. Consumer stereo projectors usually work with time-interlaced stereo, allowing each of these two images to be addressed individually. We use this so-called 3D mode for mono image display of fast moving objects. Additionally, we generate a separate image for each individual color, taking the display time offset of each color component into account. With these 360 images per second we can strongly reduce ghosting, color fringes and jitter artefacts on fast moving objects tracked by the eye, resulting in sharp objects with smooth motion. Real-time image generation at such a high frame rate can only be achieved for simple scenes or may only be possible by severely reducing quality. We show how to modify a motion blur post-processing shader to render only 60 frames.
  • Item
    Networked Displays for VR Applications: Display as a Service
    (The Eurographics Association, 2012) Löffler, Alexander; Pica, Luciano; Hoffmann, Hilko; Slusallek, Philipp; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    Stereoscopic Liquid Crystal Displays (LCDs) in a tiled setup, so-called display walls, are emerging as a replacement for the classic projection-based systems for Virtual Reality (VR) applications. They have numerous benefits over projectors, the only drawback being their maximum size, which is why VR applications usually resort to tiled display walls. Problems of display walls are the obvious bezels between the single displays making up the wall and, most importantly, the complicated pipeline for displaying synchronized content across all participating screens. This becomes especially crucial when dealing with active-stereo content, where precisely timed display of the left and right stereo channels across the entire display area is essential. Usually, these scenarios require a variety of expensive, specialized hardware, which makes it difficult for such wall setups to spread more widely. In this paper, we present our service-oriented architecture Display as a Service (DaaS), which uses a virtualization approach to shift the problem of pixel distribution from specialized hardware to generic software. DaaS provides network-transparent virtual framebuffers (VFBs) for pixel-producing applications to write into, and virtual displays (VDs), which potentially span multiple physical displays making up a display wall, to present the generated pixels on. Our architecture assumes network-enabled displays with integrated processing capabilities, such that all communication for pixel transport and synchronization between VFBs and VDs can happen entirely over IP networking using standard video streaming and Internet protocols. We show the feasibility of our approach in a heterogeneous use case scenario, evaluate latency and synchronization accuracy, and give an outlook on further potential applications in the field of VR.
  • Item
    3D User Interfaces Using Tracked Multi-touch Mobile Devices
    (The Eurographics Association, 2012) Wilkes, Curtis B.; Tilden, Dan; Bowman, Doug A.; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    Multi-touch mobile devices are becoming ubiquitous due to the proliferation of smart phone platforms such as the iPhone and Android. Recent research has explored the use of multi-touch input for 3D user interfaces on displays including large touch screens, tablets, and mobile devices. This research explores the benefits of adding six-degree-of-freedom tracking to a multi-touch mobile device for 3D interaction. We analyze and propose benefits of using tracked multi-touch mobile devices (TMMDs) with the goal of developing effective interaction techniques to handle a variety of tasks within immersive 3D user interfaces. We developed several techniques using TMMDs for virtual object manipulation, and compared our techniques to existing best-practice techniques in a series of user studies. We did not, however, find performance advantages for TMMD-based techniques. We discuss our observations and propose alternate interaction techniques and tasks that may benefit from TMMDs.
  • Item
    Floor-based Audio-Haptic Virtual Collision Responses
    (The Eurographics Association, 2012) Blom, Kristopher J.; Haringer, Matthias; Beckhaus, Steffi; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    Virtual collisions are considered an important aspect of creating effective travel interactions for virtual environments; yet they are not well understood. We introduce a new floor-based audio-haptic interface for providing virtual collision feedback, the soundfloor. With this device, haptic feedback can be provided through the floor of a projection VR system, without disturbing the visual presentation on the same floor. As the impact of feedback on virtual travel is not yet known, we also present a series of experiments that compare different feedback methods coupled with classic collision handling methods. The results of the experiments show only limited benefits of collision handling and of additional feedback for performance. However, user preference for context-appropriate feedback is evident, as well as a preference for the floor-based haptic feedback. The experiments provide evidence of best practices for handling virtual travel collisions, namely that context-appropriate feedback should be preferred and that quality sounds are sufficient when haptics cannot be provided.
  • Item
    Comparing Auditory and Haptic Feedback for a Virtual Drilling Task
    (The Eurographics Association, 2012) Rausch, Dominik; Aspöck, Lukas; Knott, Thomas; Pelzer, Sönke; Vorländer, Michael; Kuhlen, Torsten; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    While visual feedback is dominant in Virtual Environments, the use of other modalities like haptics and acoustics can enhance believability, immersion, and interaction performance. Haptic feedback is especially helpful for many interaction tasks like working with medical or precision tools. However, unlike visual and auditory feedback, haptic reproduction is often difficult to achieve due to hardware limitations. This article describes a user study examining how auditory feedback can be used to substitute for haptic feedback when interacting with a vibrating tool. Participants remove some target material with a round-headed drill while avoiding damage to the underlying surface. In the experiment, varying combinations of surface force feedback, vibration feedback, and auditory feedback are used. We describe the design of the user study and present the results, which show that auditory feedback can compensate for the lack of haptic feedback.
  • Item
    Redirected Steering for Virtual Self-Motion Control with a Motorized Electric Wheelchair
    (The Eurographics Association, 2012) Fiore, Loren Puchalla; Phillips, Lane; Bruder, Gerd; Interrante, Victoria; Steinicke, Frank; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    Redirection techniques have shown great potential for enabling users to travel in large-scale virtual environments while their physical movements are limited to a much smaller laboratory space. Traditional redirection approaches introduce a subliminal discrepancy between real and virtual motions of the user by subtle manipulations, which are thus highly dependent on the user and on the virtual scene. In the worst case, such approaches may result in failure cases that have to be resolved by obvious interventions, e.g., when a user faces a physical obstacle and tries to move forward. In this paper we introduce a remote steering method for redirection techniques used for physical transportation in an immersive virtual environment. We present a redirection controller for turning a legacy wheelchair device into a remote-control vehicle. In a psychophysical experiment we analyze the automatic angular motion redirection of our proposed controller with respect to the detectability of discrepancies between real and virtual motions. Finally, we discuss this redirection method and its novel affordances for virtual traveling.
  • Item
    Trade-Offs Related to Travel Techniques and Level of Display Fidelity in Virtual Data-Analysis Environments
    (The Eurographics Association, 2012) Ragan, Eric D.; Wood, Andrew; McMahan, Ryan P.; Bowman, Doug A.; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    Because effective navigation in 3D virtual environments (VEs) depends on the specifics of the travel techniques and the display system, we compared two travel techniques (steering and target-based) and two display conditions: a high-fidelity setup (a four-wall display with stereo and head-tracking) and a lower-fidelity setup (a single-wall display without stereo or head-tracking). In a controlled experiment, we measured performance on travel-intensive data analysis tasks in a complex underground cave environment. The results suggest that steering may be better suited for high-fidelity immersive VEs, and target-based navigation may offer advantages for less immersive systems. The study also showed significantly worse simulator sickness with higher display fidelity, with an interaction trend suggesting that this effect was intensified by steering.
  • Item
    An Empiric Evaluation of Confirmation Methods for Optical See-Through Head-Mounted Display Calibration
    (The Eurographics Association, 2012) Maier, Patrick; Dey, Arindam; Waechter, Christian A. L.; Sandor, Christian; Tönnis, Marcus; Klinker, Gudrun; Ronan Boulic and Carolina Cruz-Neira and Kiyoshi Kiyokawa and David Roberts
    The calibration of optical see-through head-mounted displays (OSTHMDs) is an important foundation for correct object alignment in augmented reality. Any calibration process for OSTHMDs requires users to align 2D points in screen space with 3D points and to confirm each alignment. In this paper, we investigate how different confirmation methods affect calibration quality. In an empirical evaluation, we compared four confirmation methods: Keyboard, Hand-held, Voice, and Waiting. We let users calibrate with a video see-through head-mounted display, which allowed us to record videos of the alignments in parallel. Later image processing provided baseline alignments for comparison against the user-generated ones. Our results provide design constraints for future calibration procedures. The Waiting method, designed to reduce head motion during confirmation, showed significantly higher accuracy than all other methods. Averaging alignments over a time frame further improved the accuracy of all methods. We validated our results by numerically comparing the user-generated projection matrices with calculated ground-truth projection matrices. The findings were also confirmed in several calibration procedures performed with an OSTHMD.