Navigating in virtual environments using a vision-based interface
2004 (English). In: Proceedings of the Third Nordic Conference on Human-Computer Interaction, 2004, pp. 113-120. Conference paper (refereed).
Interacting with and navigating virtual environments usually requires a wired interface, game console, or keyboard. The advent of perceptual interface techniques allows a new option: passive, untethered sensing of users’ pose and gesture, allowing them to maneuver through and manipulate virtual worlds. We describe new algorithms for interacting with 3-D environments using real-time articulated body tracking with standard cameras and personal computers. Our method is based on rigid stereo-motion estimation algorithms and can accurately track upper-body pose in real time. With our tracking system, users can navigate virtual environments using 3-D gestures and body poses. We analyze the space of possible perceptual interface abstractions for full-body navigation and present a prototype system based on these results. Finally, we describe an initial evaluation of our prototype system with users guiding avatars through a series of 3-D virtual game worlds.
Place, publisher, year, edition, pages
2004, pp. 113-120.
Series: ACM International Conference Proceeding Series, 82
Keywords: input and interaction technologies, perceptive user interface, virtual reality and 3D interfaces, vision-based interface
National Category: Human Computer Interaction, Computer Engineering, Other Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-77735
DOI: 10.1145/1028014.1028033
ISBN: 1581138571
ISBN: 978-158113857-3
OAI: oai:DiVA.org:kth-77735
DiVA: diva2:501868
3rd Nordic Conference on Human-Computer Interaction, NordiCHI 2004, Tampere, 23-27 October 2004
QC 20120229. Available from: 2012-02-14. Created: 2012-02-07. Last updated: 2012-02-29. Bibliographically approved.