DFKI-LT - Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis

Ingmar Steiner, Korin Richmond, Slim Ouni
Using multimodal speech production data to evaluate articulatory animation for audiovisual speech synthesis
in: Michael Pucher, Darren Cosker, Gregor Hofer, Michael Berger, William Smith (eds.):
3rd International Symposium on Facial Analysis and Animation (FAA), Vienna, Austria, ACM/FTW, September 2012
 
The importance of modeling speech articulation for high-quality audiovisual (AV) speech synthesis is widely acknowledged. Nevertheless, while state-of-the-art, data-driven approaches to facial animation can make use of sophisticated motion capture techniques, the animation of the intraoral articulators (viz. the tongue, jaw, and velum) typically relies on simple rules or viseme morphing, in stark contrast to the otherwise high quality of facial modeling. Using appropriate speech production data could significantly improve the quality of articulatory animation for AV synthesis.
 