A Multimodal Teach-in Approach to the Pick-and-Place Problem in Human-Robot Collaboration

Niko Kleer; Maurice Rekrut; Julian Wolter; Tim Schwartz; Michael Feld
In: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. ACM/IEEE International Conference on Human-Robot Interaction (HRI-2023), March 13-16, Stockholm, Sweden, Pages 81-85, ISBN 978-1-4503-9970-8, ACM, 2023.


Teaching robotic systems how to carry out a task in a collaborative environment still presents a challenge. This is because replicating natural human-to-human interaction requires interaction modalities capable of conveying complex information. Speech, gestures, and gaze-based interaction, as well as directly guiding a robotic system, are among the modalities with the potential to enable smooth multimodal human-robot interaction. This paper presents a conceptual approach for multimodally teaching a robotic system how to pick-and-place an object, one of the fundamental tasks not only in robotics but in everyday life. By establishing the task model and the dialogue model separately, we aim to decouple robot/task logic from interaction logic and to achieve modality independence for the teaching interaction. Finally, we elaborate on an experimental implementation of our models for multimodally teaching a UR-10 robot arm how to pick-and-place an object.
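The separation described above, a dialogue model that handles interaction logic and a task model that handles robot/task logic, can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: all class and function names are hypothetical, and the dialogue model's parsing is deliberately naive. The point is that inputs from different modalities are mapped to one modality-independent intent representation, so the task model never needs to know which modality produced a command.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """Modality-independent result of interpreting user input."""
    action: str
    target: str

class DialogueModel:
    """Interaction logic: maps modality-specific input to common intents.

    Hypothetical sketch; the paper's actual dialogue model is richer.
    """
    def interpret(self, modality: str, payload) -> Intent:
        if modality == "speech":
            # Naive parse of an utterance like "pick cube".
            action, target = payload.split(maxsplit=1)
            return Intent(action, target)
        if modality == "gesture":
            # A pointing gesture is assumed to arrive already resolved
            # into an action and a referenced object.
            return Intent(payload["action"], payload["object"])
        raise ValueError(f"unknown modality: {modality}")

class TaskModel:
    """Robot/task logic: executes intents regardless of their origin."""
    def __init__(self):
        self.executed = []

    def execute(self, intent: Intent) -> str:
        # A real task model would drive the robot arm here.
        self.executed.append(intent)
        return f"{intent.action} {intent.target}"
```

With this split, a spoken command and an equivalent gesture yield the same `Intent`, so adding a new modality only touches the dialogue model, never the task model.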
