WIP: The Automatic Synthesis of Multimodal Presentations
Elisabeth Andre; Wolfgang Finkler; Winfried Graf; Thomas Rist; Anne Schauder; Wolfgang Wahlster
DFKI, DFKI Research Reports (RR), Vol. 92-46, 1992.
Due to the growing complexity of information that has to be communicated by current AI systems, there is an increasing need for advanced intelligent user interfaces that take advantage of a coordinated combination of different modalities, e.g., natural language, graphics, and animation, to produce situated and user-adaptive presentations. A deeper understanding of the basic principles underlying multimodal communication requires theoretical work on computational models as well as practical work on concrete systems. In this article, we describe the system WIP, an implemented prototype of a knowledge-based presentation system that generates illustrated texts customized for the intended audience and situation. We present the architecture of WIP and introduce as its major components the presentation planner, the layout manager, and the generators for text and graphics. To achieve coherent output with an optimal media mix, these components have to be interleaved. The interplay of the presentation planner, the text generator, and the graphics generator is demonstrated by means of a system run. In particular, we show how a text-picture combination containing a crossmodal referring expression is generated by the system.
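The coordination sketched in the abstract can be made concrete with a small illustrative example. The following is a minimal, hypothetical sketch (not WIP's actual implementation; all class and function names are invented for illustration) of how a presentation planner might decompose a communicative goal into coordinated graphics and text acts, where the text act contains a crossmodal referring expression that points into the picture:

```python
# Hypothetical sketch of plan-based multimodal presentation:
# a communicative goal is split into a graphics act (depict an
# object) and a text act whose referring expression points at
# the depiction, yielding a coordinated text-picture combination.

from dataclasses import dataclass, field


@dataclass
class PresentationAct:
    mode: str       # "graphics" or "text"
    content: str    # what the mode-specific generator should realize


@dataclass
class PresentationPlan:
    acts: list = field(default_factory=list)


def plan_presentation(goal_object: str) -> PresentationPlan:
    """Decompose a 'show how to operate <object>' goal into a media mix:
    first depict the object graphically, then generate a text instruction
    containing a crossmodal referring expression to the depiction."""
    plan = PresentationPlan()
    # The graphics generator is asked to depict the object ...
    plan.acts.append(PresentationAct("graphics", f"depict({goal_object})"))
    # ... and the text generator produces a referring expression
    # that crosses modalities by pointing into the figure.
    plan.acts.append(PresentationAct(
        "text", f"Press the {goal_object} shown in the figure."))
    return plan


if __name__ == "__main__":
    plan = plan_presentation("on/off switch")
    for act in plan.acts:
        print(f"{act.mode}: {act.content}")
```

In a real system such as WIP, the planner and the mode-specific generators would run interleaved rather than strictly in sequence, so that, e.g., the text generator can adapt its referring expressions to what the graphics generator was actually able to depict.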