DFKI-LT - Producing believable robot gaze when comprehending visually situated dialogue
In: Language and Robots: Proceedings from the Symposium (LangRo'2007), University of Aveiro, Aveiro, Portugal, 12/2007
The paper presents an implemented approach to producing robot gaze while comprehending visually situated dialogue. The approach is based on an incremental model of situated dialogue processing, in which utterance interpretations are built step by step, in a "left-to-right" fashion. At each step, grammatical and dialogue-level information is combined with information about the visually situated context. Utterance processing can thus be guided to construct only situationally appropriate interpretations. Furthermore, each step determines a set of visual referents to which the unfolding utterance meaning currently refers. This information is used to drive robot gaze: the robot shifts its fixation to the most recent visual referent. The underlying assumption is that gaze behavior helps establish joint attention ("common ground") in a dialogue if there is congruence between where the robot is looking and what the (intended) visual referent is. The paper reports on a pilot study testing this assumption. The results show statistically significant interactions between congruence, believability, and the appropriateness of referring expressions.
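The incremental scheme the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the toy scene, the word categories, and the function names are all assumptions. The utterance is processed word by word; at each step the set of visual referents compatible with the partial interpretation is narrowed against the situated context, and gaze is directed at the most recently resolved referent.

```python
# Toy visual scene: object id -> visual properties (illustrative only).
SCENE = {
    "ball": {"type": "ball", "color": "red"},
    "mug":  {"type": "mug",  "color": "blue"},
    "box":  {"type": "box",  "color": "red"},
}

COLORS = {"red", "blue"}
TYPES = {"ball", "mug", "box"}

def incremental_referents(words, scene):
    """Yield, after each word, the set of scene objects the partial
    utterance could still refer to, plus the current gaze target."""
    candidates = set(scene)  # initially, any object is a possible referent
    gaze = None
    for word in words:
        # Combine the new word with the visual context: keep only
        # situationally appropriate interpretations.
        if word in COLORS:
            candidates = {o for o in candidates if scene[o]["color"] == word}
        elif word in TYPES:
            candidates = {o for o in candidates if scene[o]["type"] == word}
        # Gaze follows the most recent uniquely resolved referent.
        if len(candidates) == 1:
            gaze = next(iter(candidates))
        yield word, sorted(candidates), gaze

for step in incremental_referents(["the", "red", "ball"], SCENE):
    print(step)
```

For "the red ball", the candidate set shrinks from all three objects to the two red ones after "red", and to the ball alone after "ball", at which point gaze fixates on it, mirroring the step-by-step referent tracking described above.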
Files: BibTeX, main.gaze.langro2007.pdf