DFKI-LT - A Multimodal Listener Behaviour Driven by Audio Input
2nd International Workshop on Interacting with ECAs as Virtual Characters, Toronto, Ontario, Canada, 2010
Our aim is to build a platform allowing a user to chat with a virtual agent. The agent displays audio-visual backchannels in response to the user's verbal and nonverbal behaviours. Our system takes as input the audio-visual signals of the user and synchronously outputs the audio-visual behaviours of the agent. In this paper, we describe the SEMAINE architecture and the data flow from inputs (audio and video) to outputs (voice synthesizer and virtual characters), passing through analysers and interpreters. We focus in particular on the multimodal behaviour of the listener model driven by audio input.
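The analyser/interpreter data flow described above can be sketched as follows. This is an illustrative sketch only, not the SEMAINE implementation: all names (AudioFeatures, analyse, interpret, the feature thresholds, and the backchannel labels) are hypothetical stand-ins for the paper's components.

```python
# Hypothetical sketch of an analyser -> interpreter listener pipeline.
# None of these names or thresholds come from the SEMAINE system itself.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AudioFeatures:
    """Low-level features extracted from one audio frame of the user."""
    energy: float  # speech energy estimate
    pitch: float   # fundamental frequency estimate in Hz


def analyse(frame: AudioFeatures) -> dict:
    """Analyser: turn raw audio features into user-state cues."""
    return {
        "speaking": frame.energy > 0.1,
        "rising_pitch": frame.pitch > 200.0,
    }


def interpret(cues: dict) -> Optional[str]:
    """Interpreter: decide which backchannel the listener agent displays."""
    if cues["speaking"] and cues["rising_pitch"]:
        return "head_nod_with_vocal_mhm"  # multimodal backchannel
    if cues["speaking"]:
        return "head_nod"                 # visual-only backchannel
    return None                           # user silent: agent stays idle


# Example frame: user is speaking with rising pitch.
action = interpret(analyse(AudioFeatures(energy=0.3, pitch=220.0)))
print(action)  # head_nod_with_vocal_mhm
```

In the real system, the chosen behaviour would be sent on to the voice synthesizer and virtual character for synchronized audio-visual realization.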
Files: BibTeX, aamas2010_desevin.pdf