Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression

The aim of the SEMAINE project is to build a Sensitive Artificial Listener – a multimodal dialogue system with the social interaction skills needed for a sustained conversation with a human user. The system will emphasise “soft” communication skills, i.e. non-verbal, social and emotional perception, interaction and behaviour capabilities. The Sensitive Artificial Listener paradigm involves only very limited verbal capabilities, but has been shown to be suited for prolonged human-machine interaction. In this paradigm, we will build a real-time, robust interactive system perceiving a human user's facial expression, gaze, and voice, and engaging with the user through an Embodied Conversational Agent's body, face and voice. The agent will exhibit audiovisual listener feedback in real time while the user is speaking, and will take the user's feedback into account while the agent is speaking. The agent will pursue different dialogue strategies depending on the user's state; it will learn to interpret the user's non-verbal behaviour and adapt its own behaviour accordingly. Data to train system components will be collected initially using a Wizard-of-Oz setup, later on using the autonomous system at increasing levels of maturity. Some of the data will be released to the research community.
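The listener-feedback behaviour described above can be illustrated with a minimal sketch. All names here (`UserState`, `choose_backchannel`, the affect thresholds) are hypothetical, invented for illustration; they are not the actual SEMAINE components or their interfaces.

```python
# Hypothetical sketch of a rule that picks nonverbal listener feedback
# while the user is speaking; names and thresholds are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserState:
    """A snapshot of the perceived user, fused from face, gaze and voice."""
    is_speaking: bool
    arousal: float   # -1.0 (calm) .. 1.0 (excited)
    valence: float   # -1.0 (negative) .. 1.0 (positive)


def choose_backchannel(state: UserState) -> Optional[str]:
    """Pick an audiovisual listener signal for the agent to display.

    Returns None when the agent should hold back, e.g. when the user
    falls silent and it becomes the agent's turn to speak.
    """
    if not state.is_speaking:
        return None
    if state.valence > 0.3:
        return "smile+nod"        # mirror the user's positive affect
    if state.arousal > 0.5:
        return "raise_eyebrows"   # signal heightened attention
    return "nod"                  # neutral acknowledgement


# Example: a user speaking with positive affect draws a smiling nod.
print(choose_backchannel(UserState(True, 0.2, 0.6)))   # smile+nod
print(choose_backchannel(UserState(True, 0.7, 0.0)))   # raise_eyebrows
print(choose_backchannel(UserState(False, 0.0, 0.0)))  # None
```

In a full system such a rule would run inside a real-time loop fed by the perception components, and could be replaced by a model learned from the Wizard-of-Oz data mentioned above.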

Consortium partners are involved in several standardisation initiatives related to building socially competent interactive agents; work in SEMAINE will help shape these standards.

Funded by: European Union (FP7-ICT)
Project Manager: Marc Schröder (Marc.Schroeder@dfki.de)
Contact: Marc Schröder (Marc.Schroeder@dfki.de)
Duration: 01.01.2008 - 31.12.2010
Partners: DFKI, Germany (coordinator),
Queen's University Belfast, Northern Ireland,
Imperial College London, England,
University of Twente, Netherlands,
Université de Paris VIII, France,
TU München, Germany