
Zeeshan Ahmed, Ingmar Steiner, Éva Székely, Julie Carson-Berndsen
A System for Facial Expression-based Affective Speech Translation
ACM International Conference on Intelligent User Interfaces (IUI), pp. 57-58, Santa Monica, CA, USA, ACM, March 2013
 
In the emerging field of speech-to-speech translation, emphasis is currently placed on linguistic content, while the paralinguistic information conveyed by facial expression or tone of voice is typically neglected. We present a prototype system for multimodal speech-to-speech translation that automatically recognizes and translates spoken utterances from one language into another, with the output rendered by a speech synthesis system. The novelty of our system lies in generating the synthetic speech output in one of several expressive styles, which is determined automatically by using a camera to analyze the user's facial expression during speech.
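The pipeline sketched in the abstract can be summarized as a recognize-translate-synthesize chain, with the expressive style carried on a separate channel from the camera. The following is an illustrative sketch only; every component and the toy lexicon are hypothetical stand-ins, not the modules actually used in the system.

```python
# Hypothetical stand-ins for the pipeline components; the real system uses
# ASR, MT, facial-expression analysis, and expressive TTS engines.

TOY_LEXICON = {"hello": "hallo", "world": "welt"}  # stand-in for the MT engine


def recognize_speech(audio: str) -> str:
    """Stand-in ASR: a real system would decode an audio signal into text."""
    return audio.lower()


def classify_expression(frames: list) -> str:
    """Stand-in facial-expression analysis selecting an expressive style."""
    return frames[0] if frames else "neutral"


def translate(text: str) -> str:
    """Stand-in word-by-word translation using the toy lexicon."""
    return " ".join(TOY_LEXICON.get(word, word) for word in text.split())


def synthesize(text: str, style: str) -> str:
    """Stand-in expressive TTS: tags the output with the chosen style."""
    return f"<{style}> {text}"


def affective_s2s(audio: str, frames: list) -> str:
    """Chain the components; the style bypasses translation entirely."""
    style = classify_expression(frames)
    return synthesize(translate(recognize_speech(audio)), style)


print(affective_s2s("Hello world", ["happy"]))  # -> <happy> hallo welt
```

The key design point mirrored here is that the paralinguistic channel (the detected expression) is passed directly to the synthesizer, independent of the linguistic translation path.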
 
Files: BibTeX, 2451176.2451197