Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Krum, David M. | en_US |
| dc.contributor.author | Omoteso, Olugbenga | en_US |
| dc.contributor.author | Ribarsky, William | en_US |
| dc.contributor.author | Starner, Thad | en_US |
| dc.contributor.author | Hodges, Larry F. | en_US |
| dc.contributor.editor | D. Ebert, P. Brunet, and I. Navazo | en_US |
| dc.date.accessioned | 2014-01-30T06:50:45Z | |
| dc.date.available | 2014-01-30T06:50:45Z | |
| dc.date.issued | 2002 | en_US |
| dc.description.abstract | A growing body of research shows several advantages of multimodal interfaces, including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface, one that integrates speech and hand gestures. The interface operates relative to the user and can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole Earth 3D visualization, which presents navigation challenges due to the large range of scales and the extended spaces available for travel. Characteristics of the multimodal interface are examined, including speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. The implementation shows that such a multimodal interface can be effective in a real environment and establishes parameters for the design and use of such interfaces. | en_US |
| dc.description.seriesinformation | Eurographics / IEEE VGTC Symposium on Visualization | en_US |
| dc.identifier.isbn | 1-58113-536-X | en_US |
| dc.identifier.issn | 1727-5296 | en_US |
| dc.identifier.uri | https://doi.org/10.2312/VisSym/VisSym02/195-200 | en_US |
| dc.publisher | The Eurographics Association | en_US |
| dc.title | Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment | en_US |