Invited talks:
Interactive Communication for Autonomous Intelligent Robots

Kolja Kühnlenz (Technical University of Munich)
The Autonomous City Explorer: Experiences from a recent test trial in the city center of Munich

Abstract

Future personal robots in everyday real-world settings will have to face the challenge that there will always be knowledge gaps. A priori knowledge may not be available in every situation, and learning requires trials, which may not always be feasible. To overcome these limitations, we believe that a crucial capability of tomorrow's robot assistants will be to assess their own knowledge for gaps and to fill those gaps through interaction with humans.

In this talk, recent results of the Autonomous City Explorer (ACE) project will be presented. In this project, an autonomous robot found its way over 1.5 km from the main campus of TU Munich to the Munich city center by asking pedestrians for directions. ACE was developed as a pilot project exploring the feasibility, in terms of human acceptance, of personal assistance robots that can extend their knowledge not only through cognition but also through human-like communication in real-world settings. To fill gaps in its route knowledge, ACE actively approaches humans and initiates interactions, retrieves directions from their pointing gestures, and converts this information into a plan that can be executed with conventional robot navigation.
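The abstract describes a perception-interaction-navigation loop. The following minimal Python sketch illustrates that idea only at a conceptual level; all class and method names (Pose, find_nearest_pedestrian, ask_for_direction, navigate_to) are hypothetical illustrations and do not reflect the actual ACE software interfaces.

# Conceptual sketch of an ACE-style "ask for directions" loop.
# All robot-interface names below are hypothetical, not the real ACE code.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # metres, in the robot's world frame
    y: float
    theta: float  # heading in radians

def direction_to_waypoint(robot: Pose, pointing_angle: float, step: float = 20.0) -> Pose:
    """Convert a pointing direction (relative to the robot's heading)
    into an intermediate goal for a conventional navigation stack."""
    heading = robot.theta + pointing_angle
    return Pose(robot.x + step * math.cos(heading),
                robot.y + step * math.sin(heading),
                heading)

def explore(robot, goal_reached) -> None:
    """High-level loop: while the goal is not reached, approach a pedestrian,
    obtain a pointing direction, and follow it with the navigation stack."""
    pose = robot.current_pose()
    while not goal_reached(pose):
        person = robot.find_nearest_pedestrian()    # hypothetical perception call
        robot.approach(person)                      # actively initiate the interaction
        pointing_angle = robot.ask_for_direction()  # angle estimated from the pointing gesture
        waypoint = direction_to_waypoint(pose, pointing_angle)
        robot.navigate_to(waypoint)                 # conventional robot navigation
        pose = robot.current_pose()

if __name__ == "__main__":
    # Example: a pedestrian points 45 degrees to the robot's left.
    print(direction_to_waypoint(Pose(0.0, 0.0, 0.0), math.radians(45)))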

About the speaker

Kolja Kühnlenz is currently a Senior Lecturer at the Institute of Automatic Control Engineering (LSR) and Carl von Linde Junior Fellow at the Institute for Advanced Study, Technische Universität München, Munich, Germany. He is director of the Dynamic Vision Research Laboratory at LSR, which currently has seven PhD students.

His research interests include Robot Vision, Visual Servoing, High-Speed Vision, Attention, Bio-inspired Vision, Humanoid Robots, Human-Robot Interaction, Emotions, and Sociable Systems, with a strong focus on real-world applications of (social) robots.

Britta Wrede (Bielefeld University)
From explicit to implicit communication: is alignment the solution?

Abstract

In recent years, the theory of grounding – according to which participants explicitly negotiate what they have understood and thereby build common ground – has been challenged by alignment, a mechanistic account of understanding. Alignment is based on the observation that in task-oriented interactions, communication partners tend to align their surface representations (e.g. lexical or syntactic choices) implicitly, which apparently helps align their underlying situation models and thus facilitates mutual understanding.

In this talk, Britta Wrede will present experimental analyses of human-robot interactions in which misunderstandings occur; these are often caused by implicit signals from the robot that the human interprets communicatively. The talk will discuss whether such implicit mechanisms of understanding can be useful in human-robot interaction.

About the speaker

Britta Wrede is head of the research group Hybrid Society within the Institute for Cognition and Robotics (CoR-Lab) at Bielefeld University. She received her Master's degree in Computational Linguistics and her Ph.D. (Dr.-Ing.) in computer science from Bielefeld University in 1999 and 2002, respectively. From 2002 to 2003 she held a DAAD postdoctoral fellowship at the speech group of the International Computer Science Institute (ICSI) in Berkeley, USA. In 2003 she rejoined the Applied Informatics Group at Bielefeld University and has been involved in several EU and national (DFG, BMBF) projects. Since 2008 she has headed her own research group at CoR-Lab.

Her research interests include speech recognition, prosodic and acoustic speech analysis for propositional and affective processing, dialog modeling, and human-robot interaction. Her current research focuses on integrating multi-modal information as a basis for bootstrapping speech and action learning in a tutoring scenario.