

Learning to Fly via Deep Model-Based Reinforcement Learning

Philip Becker-Ehmck; Maximilian Karl; Jan Peters; Patrick van der Smagt
In: Computing Research Repository (CoRR), Vol. abs/2003.08876, arXiv, 2020.


Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications. Yet, reinforcement learning has had only limited impact on real-time robot control due to its high demand for real-world interaction. In this work, by leveraging a learnt probabilistic model of drone dynamics, we learn a thrust-attitude controller for a quadrotor through model-based reinforcement learning. No prior knowledge of the flight dynamics is assumed; instead, a sequential latent variable model, used generatively and as an online filter, is learnt from raw sensory input. The controller and value function are optimised entirely by propagating stochastic analytic gradients through generated latent trajectories. We show that "learning to fly" can be achieved with less than 30 minutes of experience with a single drone, and that the result can be deployed on a self-built drone using only onboard computational resources and sensors.
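The core loop of the abstract (optimise a controller purely on trajectories imagined by a learnt dynamics model) can be sketched in a few lines. Everything below is a hypothetical stand-in, not the paper's method: the "learnt" latent dynamics are a hand-coded double integrator rather than a sequential latent variable model, the reward is a simple quadratic distance to hover, and a finite-difference gradient replaces the stochastic analytic gradients the paper propagates through the trajectories.

```python
import numpy as np

# Hypothetical linear latent dynamics standing in for the learnt model:
# z = (position, velocity); the action u is a scalar "thrust" command.
A = np.array([[1.0, 0.1],   # position integrates velocity
              [0.0, 1.0]])
B = np.array([0.0, 0.1])    # thrust changes velocity

def imagine_return(theta, z0=np.array([1.0, 0.0]), horizon=20):
    """Roll out a trajectory under the linear policy u = theta @ z
    and return the summed reward (negative squared distance to hover)."""
    z, ret = z0.copy(), 0.0
    for _ in range(horizon):
        u = float(theta @ z)
        z = A @ z + B * u
        ret -= float(z @ z)
    return ret

def finite_diff_grad(f, theta, eps=1e-4):
    """Finite-difference surrogate for the analytic pathwise gradient
    through the generated trajectory."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return g

theta = np.zeros(2)           # start from a do-nothing controller
for _ in range(150):          # gradient ascent on the imagined return
    theta += 1e-3 * finite_diff_grad(imagine_return, theta)
```

The key design point the sketch preserves is that no real-world interaction happens inside the optimisation loop: all gradients come from rollouts of the (here fixed, in the paper learnt) model, which is what makes the method sample-efficient enough to train in under 30 minutes of flight.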
