The Cognitive Assistants department develops the foundations of multimodal human-computer interaction and personalized dialogue systems, integrating speech, gestures, and facial expressions with physical interaction. User, task, and domain models are used to design dialogue patterns that are as natural and robust as possible, so that they remain understandable even in group discussions or noisy environments. By integrating virtual characters, even emotions and socially interactive behavior can be produced as output. Chatbots, for example, provide a slim interface for accessing multimodal dialogue technologies.
As more and more new types of devices and interaction technologies find their way into our daily lives, one challenge is to view these networked “systems of systems of systems” as a single intelligent, cyber-physical environment and to make them usable for humans in their entirety. The large variety of sensors and actuators should be accessible to the mobile user intuitively and multimodally at any time, while the environment adapts to the user’s needs in a multi-adaptive way.