On-body IE: A Head-Mounted Multimodal Augmented Reality System for Learning and Recalling Faces

Daniel Sonntag, Takumi Toyama

In: Proceedings of the 9th International Conference on Intelligent Environments (IE-13), July 18-19, 2013, Athens, Greece. IEEE, 2013.


We present a new augmented reality (AR) system for knowledge-intensive, location-based expert work. The multimodal interaction system combines multiple on-body input and output devices: a speech-based dialogue system, a head-mounted augmented reality display (HMD), and a head-mounted eye-tracker. The interaction devices were selected to augment and improve expert work in a specific medical application context that demonstrates the system's potential. In the sensitive domain of examining patients in a cancer screening program, we combine several active user input devices in the way most convenient for both the patient and the doctor. The resulting multimodal AR system constitutes an on-body intelligent environment (IE); it has the potential to yield higher performance outcomes and provides a direct data acquisition control mechanism. It supports the doctor's ability to recall the specific patient context through a virtual, context-based, patient-specific "external brain" that remembers patient faces and adapts the virtual augmentation to the specific patient observation and finding context. In addition, patient data can be displayed on the HMD, triggered by voice or object/patient recognition. The learned (patient) faces and immovable objects (e.g., a large medical device) serve as environmental cues that make the context-dependent recognition model part of the IE, helping doctors achieve specific goals in the hospital routine.
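The core mechanism the abstract describes, learned faces or fixed objects acting as cues that trigger a context-specific patient-data overlay on the HMD, can be sketched as follows. This is a minimal illustration only; all class and function names (`ContextStore`, `on_recognition`, etc.) are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of the recognition-triggered HMD overlay loop.
# A recognizer (face, object, or voice) emits a cue identifier; the
# context store maps learned cues to patient-specific context, and the
# overlay is adapted to the recalled observation/finding context.
from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    name: str
    findings: list = field(default_factory=list)


class ContextStore:
    """Maps learned cues (patient faces, immovable objects) to context."""

    def __init__(self):
        self._records = {}

    def learn(self, cue_id: str, record: PatientRecord) -> None:
        """Associate a recognized cue with a patient context."""
        self._records[cue_id] = record

    def recall(self, cue_id: str):
        """Return the stored context for a cue, or None if unknown."""
        return self._records.get(cue_id)


def on_recognition(store: ContextStore, cue_id: str) -> str:
    """Called when face/object recognition or a voice command fires:
    recall the patient context and render it for the HMD."""
    record = store.recall(cue_id)
    if record is None:
        return "HMD overlay: unknown cue -- no patient data shown"
    findings = ", ".join(record.findings) or "none"
    return f"HMD overlay: {record.name} | findings: {findings}"


# Usage: learn a face cue, then simulate a recognition event.
store = ContextStore()
store.learn("face_042", PatientRecord("Patient A", ["screening 2013-05"]))
print(on_recognition(store, "face_042"))
```

The point of the sketch is the separation of concerns the paper implies: recognition supplies only a cue identifier, while the on-body IE owns the mapping from cue to patient context and decides what augmentation to display.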

Deutsches Forschungszentrum für Künstliche Intelligenz
German Research Center for Artificial Intelligence