Vision-Based Location-Awareness in Augmented Reality Applications

Daniel Sonntag; Takumi Toyama

In: 3rd Workshop on Location Awareness for Mixed and Dual Reality (LAMDa’13). International Conference on Intelligent User Interfaces (IUI-13), March 19-22, Santa Monica, CA, USA, ACM Press, 2013.


We present an integrated HCI approach that incorporates eye gaze for real-time location-awareness. A new augmented reality (AR) system for knowledge-intensive location-based work combines multiple on-body input and output devices: a speech-based dialogue system, a head-mounted AR display (HMD), and a head-mounted eye-tracker. These interaction devices were selected to augment and improve navigation on a hospital's premises (outdoors and indoors, Figure 1), which demonstrates the approach's potential. We focus on the eye-tracker interaction, which provides cues for location-awareness.


German Research Center for Artificial Intelligence (DFKI)