Context-based Multimodal Output for Human-Robot Collaboration

Magdalena Kaiser, Christian Bürckert

In: Proceedings of the 2018 11th International Conference on Human System Interaction (HSI-2018), July 4-6, Gdańsk, Poland. IEEE, 2018. ISBN 978-1-5386-5024-0.


Research on multimodal systems for human-robot interaction mostly focuses on the processing of inputs. Yet, the output is equally important: a robot that is able to use different modalities in an interaction appears more natural and can be understood more easily. In this paper, we present our multimodal fission framework, called MMF framework, which incorporates planning criteria to select the most suitable set of modalities based on information about the interaction context. We describe our input and output layer, present an algorithm for the automated selection of suitable attributes for referencing objects verbally, as well as a simple assessment of the suitability of pointing gestures in the given context. Furthermore, we describe a new approach that formulates the modality and device selection as a constraint optimization problem. Finally, we report the results of a user study that was conducted to evaluate the generated multimodal output.
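The paper does not spell out its formulation here, but the idea of casting modality and device selection as a constraint optimization problem can be illustrated with a minimal sketch. The modality names, device names, scores, and the `suitability` function below are all invented for illustration; only the general scheme (hard constraints exclude assignments, soft constraints adjust scores, the best-scoring assignment wins) reflects the approach named in the abstract.

```python
from itertools import product

# Illustrative modalities and candidate devices (None = modality not used).
# These names are assumptions, not taken from the paper.
MODALITIES = ["speech", "pointing", "display"]
DEVICES = {
    "speech": ["robot_speaker"],
    "pointing": [None, "robot_arm"],
    "display": [None, "tablet"],
}

BASE_SCORE = {"robot_speaker": 0.6, "robot_arm": 0.8, "tablet": 0.5}

def suitability(modality, device, context):
    """Score one (modality, device) pair in a given interaction context."""
    if device is None:
        return 0.0
    if modality == "pointing" and not context["target_visible"]:
        return 0.0  # hard constraint: cannot point at an occluded object
    score = BASE_SCORE[device]
    if modality == "speech" and context["noisy"]:
        score -= 0.4  # soft constraint: speech is less suitable in a noisy scene
    return score

def select_output(context):
    """Exhaustively search device assignments and return the best-scoring one."""
    best, best_score = None, float("-inf")
    for assignment in product(*(DEVICES[m] for m in MODALITIES)):
        if all(d is None for d in assignment):
            continue  # constraint: at least one modality must carry the output
        score = sum(suitability(m, d, context)
                    for m, d in zip(MODALITIES, assignment))
        if score > best_score:
            best, best_score = dict(zip(MODALITIES, assignment)), score
    return best, best_score
```

In a quiet scene with a visible target, all three modalities are selected; when the target is occluded, the hard constraint drives the score of any pointing device to zero, so pointing drops out of the chosen assignment. A real system would replace the exhaustive search with a constraint solver, but the objective structure is the same.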

German Research Center for Artificial Intelligence (DFKI)