It is widely believed that future automatic services will come with interfaces that support conversational interaction. Interacting with devices and services will be as easy and natural as talking to a friend or an assistant. In face-to-face communication we use all our senses: we speak to each other, and we see facial expressions, hand gestures, sketches, and words scribbled with a pen. Face-to-face interaction is multimodal. To offer conversational interaction, future automatic services will therefore be multimodal as well: computers will understand speech and typed text and recognize the gestures, facial expressions, and body posture of the human interlocutor, and they will use these same communication channels, alongside graphical presentation, to render their responses.
COMIC starts from the assumption that multimodal interaction with computers should be firmly based on generic cognitive models of multimodal interaction. Much fundamental research is still needed before multimodal interaction can be grounded in an understanding of the generic cognitive principles that underlie this type of interaction. COMIC will build a number of demonstrators to evaluate the applicability of the cognitive models in the domains of eWork and eCommerce.
- Max-Planck Institute for Psycholinguistics (consortium coordinator)
- DFKI GmbH
- Max-Planck Institute for Biological Cybernetics
- University of Edinburgh
- University of Nijmegen
- University of Sheffield
- ViSoft GmbH