Publication

Expressive speech synthesis in MARY TTS using audiobook data and EmotionML

Marcela Charfuelan Oliva, Ingmar Steiner

In: Proceedings of Interspeech 2013, 14th Conference in the Annual Series of Interspeech Events (INTERSPEECH-2013), August 25-29, Lyon, France. ISCA, 8/2013.

Abstract

This paper describes a framework for synthesis of expressive speech based on MARY TTS and Emotion Markup Language (EmotionML). We describe the creation of expressive unit selection and HMM-based voices using audiobook data labelled according to voice styles. Audiobook data is labelled/split according to voice styles by principal component analysis (PCA) of acoustic features extracted from segmented sentences. We introduce the implementation of EmotionML in MARY TTS and explain how it is used to represent and control expressivity in terms of discrete emotions or emotion dimensions. Preliminary results on perception of different voice styles are presented.

Index Terms: speech synthesis, unit selection, parametric speech synthesis, expressive speech, EmotionML, signal processing
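To illustrate the labelling step described in the abstract, the following is a minimal sketch of how segmented sentences could be split into voice styles by PCA of per-sentence acoustic features followed by clustering. It is not the authors' implementation; the feature set, the number of components, the number of styles, and the use of k-means are assumptions made for illustration.

```python
# Hedged sketch: grouping audiobook sentences into voice styles via PCA of
# acoustic features, as outlined in the abstract. Feature names, component
# count, and cluster count are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans


def label_voice_styles(features: np.ndarray, n_styles: int = 3) -> np.ndarray:
    """Assign a voice-style label to each segmented sentence.

    features: (n_sentences, n_features) matrix of per-sentence acoustic
    measures, e.g. F0 mean/range, energy, and speaking rate (assumed here).
    """
    # Standardize so no single acoustic feature dominates the PCA.
    scaled = StandardScaler().fit_transform(features)

    # Project onto the first two principal components, capturing the main
    # axes of acoustic variation across sentences.
    components = PCA(n_components=2).fit_transform(scaled)

    # Cluster in PCA space; each cluster is treated as one voice style and
    # used to split the data for unit selection / HMM voice building.
    return KMeans(n_clusters=n_styles, n_init=10, random_state=0).fit_predict(components)


if __name__ == "__main__":
    # Placeholder random features for 100 sentences, purely for demonstration.
    rng = np.random.default_rng(0)
    fake_features = rng.normal(size=(100, 6))
    print(label_voice_styles(fake_features)[:10])
```

In such a pipeline, each resulting cluster would provide the training material for one expressive voice, and a synthesis request annotated with EmotionML (as introduced in the paper) would select among the styles at run time.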

Projects

marytts_emotionml_final.pdf (PDF, 178 KB)

German Research Center for Artificial Intelligence