DFKI-LT - Realizing Multimodal Behavior
In: Proceedings of the 10th International Conference on Intelligent Virtual Agents (IVA 2010)
Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, . . . ) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are needed. We suggest distinguishing realization planning, where gesture and speech are processed symbolically using the behavior markup language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML, and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.
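The timing-resolution step mentioned above can be illustrated with a minimal sketch: BML sync points (gesture phase boundaries, word onsets) become variables, alignment constraints become offset equations of the form time(b) = time(a) + d, and absolute times (e.g. word onsets from a TTS engine) anchor the system, which is then solved by propagation. All names and offsets below are hypothetical, not the paper's actual solver or lexicon.

```python
# Hypothetical sketch of BML-style sync-point timing resolution.
# Variables are sync points; constraints are fixed offsets between them.
from collections import defaultdict, deque

def resolve_times(anchors, offsets):
    """anchors: {sync_point: absolute_time_in_seconds}
    offsets: list of (a, b, d) meaning time(b) = time(a) + d."""
    graph = defaultdict(list)
    for a, b, d in offsets:
        graph[a].append((b, d))
        graph[b].append((a, -d))  # each constraint propagates both ways
    times = dict(anchors)
    queue = deque(times)
    while queue:
        a = queue.popleft()
        for b, d in graph[a]:
            t = times[a] + d
            if b not in times:
                times[b] = t
                queue.append(b)
            elif abs(times[b] - t) > 1e-9:
                raise ValueError(f"inconsistent timing constraints at {b}")
    return times

# Example: the TTS engine fixes a word onset at 0.8 s; the gesture
# stroke must start at that onset, and gesture preparation is
# assumed to take 0.4 s before the stroke.
times = resolve_times(
    anchors={"speech:this:onset": 0.8},
    offsets=[("speech:this:onset", "gesture:stroke_start", 0.0),
             ("gesture:stroke_start", "gesture:start", -0.4)],
)
print(times["gesture:start"])  # 0.4
```

A real realizer would feed the resolved absolute times into the animation layer (EMBRScript in the paper's architecture), and a full constraint solver would additionally handle inequality constraints and over-constrained specifications.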