DFKI-LT - Salience-driven Contextual Priming of Speech Recognition for Human-Robot Interaction
In Proceedings of ECAI 2008, Athens, Greece, July 2008
The paper presents an implemented model for priming speech recognition with contextual information about salient entities. The underlying hypothesis is that, in human-robot interaction, speech recognition performance can be improved by exploiting knowledge about the immediate physical situation and the dialogue history. To this end, visual salience (objects perceived in the physical scene) and linguistic salience (objects and events already mentioned in the dialogue) are integrated into a single cross-modal salience model. The model is dynamically updated as the environment changes and is used to establish expectations about which words are most likely to be heard in the given context. The update is realised by continuously adapting the word-class probabilities specified in a statistical language model. The paper discusses the motivations behind the approach and presents the implementation as part of a cognitive architecture for mobile robots. Evaluation results on a test suite show a statistically significant improvement in word error rate (WER) for salience-driven priming over a commercial baseline recogniser.
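The mechanism the abstract describes — merging visual and linguistic salience into one cross-modal score, then raising the probabilities of the associated word classes in a class-based language model — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the weighting parameter `alpha`, the multiplicative `boost`, and the entity-to-word-class mapping are all hypothetical.

```python
def cross_modal_salience(visual, linguistic, alpha=0.5):
    """Combine visual and linguistic salience scores into one
    cross-modal score per entity (hypothetical linear weighting)."""
    entities = set(visual) | set(linguistic)
    return {e: alpha * visual.get(e, 0.0) + (1 - alpha) * linguistic.get(e, 0.0)
            for e in entities}

def prime_class_probs(base_probs, salience, word_classes, boost=2.0):
    """Raise the probability of word classes tied to salient entities,
    then renormalise so the distribution still sums to 1."""
    primed = dict(base_probs)
    for entity, score in salience.items():
        for cls in word_classes.get(entity, []):
            primed[cls] = primed.get(cls, 0.0) * (1.0 + boost * score)
    total = sum(primed.values())
    return {cls: p / total for cls, p in primed.items()}

# Toy example: a ball is visible in the scene, a mug was mentioned earlier.
base = {"ball_words": 0.2, "mug_words": 0.2, "other": 0.6}
salience = cross_modal_salience({"ball": 1.0}, {"mug": 0.5})
primed = prime_class_probs(base, salience,
                           {"ball": ["ball_words"], "mug": ["mug_words"]})
```

Re-running this whenever the scene or dialogue state changes gives the continuous language-model adaptation the paper refers to; the primed distribution then replaces the recogniser's static class probabilities.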
Files: BibTeX, main.sitASR.ecai08.pdf