

HJB optimal feedback control with deep differential value functions and action constraints

Michael Lutter; Boris Belousov; Kim Listmann; Debora Clever; Jan Peters
In: Leslie Pack Kaelbling; Danica Kragic; Komei Sugiura (Eds.): 3rd Annual Conference on Robot Learning. Conference on Robot Learning (CoRL-2019), October 30 - November 1, Osaka, Japan, pages 640-650, Proceedings of Machine Learning Research (PMLR), Vol. 100, PMLR, 2019.


Learning optimal feedback control laws capable of executing optimal trajectories is essential for many robotic applications. Such policies can be learned using reinforcement learning or planned using optimal control. While reinforcement learning is sample-inefficient, optimal control only plans a single optimal trajectory from a specific starting configuration. In this paper, we propose HJB control to learn an optimal feedback policy, rather than a single trajectory, using principles from optimal control. By exploiting the inherent structure of the robot dynamics and a strictly convex action cost, we derive principled cost functions such that the optimal policy naturally obeys the action limits and, given the optimal value function, is globally optimal and stable on the training domain. The corresponding optimal value function is learned end-to-end by embedding a deep differential network in the Hamilton-Jacobi-Bellman differential equation and minimizing the residual of this equation, while simultaneously decreasing the discounting from short- to far-sighted to enable learning. Our proposed approach yields an optimal feedback control law in continuous time that, in contrast to existing approaches, generates an optimal trajectory from any point in state-space without the need for replanning. The resulting approach is evaluated on non-linear systems and achieves optimal feedback control where standard optimal control methods require frequent replanning.
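To make the abstract's recipe concrete, the following is a minimal sketch of an HJB residual loss in PyTorch, under assumptions not stated above: a pendulum-like control-affine system dx/dt = a(x) + B(x)u, a quadratic state cost, and the atanh-integral action cost whose closed-form minimizer saturates at the action limit. All names, dynamics, network sizes, and constants are illustrative, not the authors' implementation; in particular, autograd stands in for the paper's deep differential network, which provides the value gradient analytically.

```python
import math
import torch

u_max = 2.0                                   # illustrative action limit
Q = torch.diag(torch.tensor([1.0, 0.1]))      # illustrative state cost weights

def a_fn(x):
    # Drift a(x) for a pendulum-like system; x = (angle, angular velocity).
    return torch.stack([x[:, 1], torch.sin(x[:, 0])], dim=1)

def B_fn(x):
    # Control matrix B(x), constant here.
    return torch.tensor([[0.0], [1.0]]).unsqueeze(0).expand(x.shape[0], -1, -1)

value_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def hjb_loss(x, rho):
    """Squared residual of the discounted HJB equation at sampled states x."""
    x = x.clone().requires_grad_(True)
    V = value_net(x)
    # dV/dx via autograd (stand-in for the differential network's gradient).
    dVdx = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    B = B_fn(x)
    z = torch.bmm(B.transpose(1, 2), dVdx.unsqueeze(-1)).squeeze(-1)  # B^T dV/dx
    # Closed-form optimal action for g'(u) = u_max * atanh(u / u_max):
    # it obeys |u*| <= u_max by construction.
    u = -u_max * torch.tanh(z / u_max)
    v = torch.clamp(u / u_max, -0.999, 0.999)
    g = u_max ** 2 * (v * torch.atanh(v) + 0.5 * torch.log1p(-v ** 2))
    q = torch.einsum('bi,ij,bj->b', x, Q, x).unsqueeze(-1)
    xdot = a_fn(x) + torch.bmm(B, u.unsqueeze(-1)).squeeze(-1)
    # Discounted HJB: rho * V(x) = min_u [ q(x) + g(u) + dV/dx . f(x, u) ].
    residual = rho * V - (q + g + (dVdx * xdot).sum(-1, keepdim=True))
    return (residual ** 2).mean()

# Anneal rho from large (short-sighted) to small (far-sighted), matching
# the abstract's curriculum on the discounting; schedule is illustrative.
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
for step in range(5000):
    rho = 5.0 * 0.999 ** step
    x = (torch.rand(256, 2) - 0.5) * torch.tensor([2 * math.pi, 8.0])
    loss = hjb_loss(x, rho)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the policy u* = -u_max tanh(B(x)^T dV/dx / u_max) is a closed-form function of the learned value gradient, the feedback law is available at every state once the value function is trained, which is what removes the need for replanning.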
