

Learning to transfer optimal navigation policies

Kristian Kersting; Christian Plagemann; Alexandru Cocora; Wolfram Burgard; Luc De Raedt
In: Advanced Robotics, Vol. 21, No. 13, Pages 1565-1582, Taylor & Francis Online, 2007.


Autonomous agents that act in the real world utilizing sensory input greatly rely on the ability to plan their actions and to transfer these skills across tasks. The majority of path-planning approaches for mobile robots, however, solve the current navigation problem from scratch, given the current and goal configuration of the robot. Consequently, these approaches yield highly efficient plans for the specific situation, but the computed policies typically do not transfer to other, similar tasks. In this paper, we propose to apply techniques from statistical relational learning to the path-planning problem. More precisely, we propose to learn relational decision trees as abstract navigation strategies from example paths. Relational abstraction has several interesting and important properties. First, it allows a mobile robot to imitate navigation behavior shown by users or by optimal policies. Second, it yields comprehensible models of behavior. Finally, a navigation policy learned in one environment naturally transfers to unknown environments. In several experiments with real robots and in simulated runs, we demonstrate that our approach yields efficient navigation plans. We show that our system is robust against observation noise and can outperform hand-crafted policies.
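To illustrate the idea of a relational navigation policy, the sketch below hand-builds a tiny decision tree over abstract relational predicates of the robot's state. The predicates (`in_room`, `door_ahead`), the action names, and the tree itself are hypothetical examples for intuition only, not the learned model from the paper; the point is that a policy phrased over relations, rather than metric coordinates, applies unchanged in any environment exhibiting the same relational structure.

```python
# Illustrative sketch only: a hand-built "relational decision tree" policy.
# All predicates, actions, and state keys here are hypothetical, chosen to
# show why relational abstraction transfers across environments.

def in_room(state):
    """Hypothetical relational predicate: is the robot inside a room?"""
    return state["location_type"] == "room"

def door_ahead(state):
    """Hypothetical relational predicate: is a doorway directly ahead?"""
    return state.get("door_ahead", False)

def goal_reached(state):
    """Hypothetical relational predicate: is the goal the current place?"""
    return state["location"] == state["goal"]

def policy(state):
    """Abstract navigation policy: each decision tests a relation of the
    state, never an environment-specific coordinate, so the same tree
    can be evaluated in any map."""
    if goal_reached(state):
        return "stop"
    if door_ahead(state):
        return "pass_door"
    if in_room(state):
        return "approach_nearest_door"
    return "follow_corridor"

# The identical policy runs in two different (made-up) environments:
office = {"location": "room_a", "goal": "room_b",
          "location_type": "room", "door_ahead": False}
home = {"location": "hall", "goal": "hall",
        "location_type": "corridor"}

print(policy(office))  # -> approach_nearest_door
print(policy(home))    # -> stop
```

A learned relational tree would play the same role as `policy` here, except that its tests and structure are induced from example paths rather than written by hand.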
