

Stable reinforcement learning with autoencoders for tactile and visual data

Herke van Hoof; Nutan Chen; Maximilian Karl; Patrick van der Smagt; Jan Peters
In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2016), October 9-14, Daejeon, Korea, Republic of, Pages 3928-3934, IEEE, 2016.


For many tasks, tactile or visual feedback is helpful or even crucial. However, designing controllers that take such high-dimensional feedback into account is non-trivial. Therefore, robots should be able to learn tactile skills through trial and error by using reinforcement learning algorithms. The input domain for such tasks, however, might include strongly correlated or irrelevant dimensions, making it hard to specify a suitable metric on such domains. Auto-encoders specialize in finding compact representations, on which defining such a metric is likely to be easier. Therefore, we propose a reinforcement learning algorithm that can learn non-linear policies in continuous state spaces and that leverages representations learned using auto-encoders. We first evaluate this method on a simulated toy task with visual input. Then, we validate our approach on a real-robot tactile stabilization task.
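The pipeline described in the abstract — compress high-dimensional observations with an auto-encoder, then learn a policy on the compact code — can be sketched as follows. This is a minimal illustration, not the paper's method: it uses a linear auto-encoder trained by plain gradient descent on synthetic observations, and the variable names (`W_enc`, `W_dec`, the mixing matrix) are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional "sensor" data that actually lies near a
# 2-D manifold: 2 latent factors mixed into 32 correlated channels.
latent_true = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 32))
X = latent_true @ mixing + 0.01 * rng.normal(size=(500, 32))

# Linear auto-encoder: encoder (32 -> 2) and decoder (2 -> 32),
# trained to minimize mean squared reconstruction error.
W_enc = 0.1 * rng.normal(size=(32, 2))
W_dec = 0.1 * rng.normal(size=(2, 32))
lr = 1e-2
for _ in range(2000):
    Z = X @ W_enc                 # compact representation (code)
    err = Z @ W_dec - X           # reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

recon_mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))

# The low-dimensional code Z = X @ W_enc would then serve as the state
# input to the reinforcement learning algorithm, in place of the raw
# 32-dimensional observation.
```

Since the correlated channels collapse into two code dimensions, a distance metric on `Z` is far more meaningful than one on the raw 32-dimensional input, which is the motivation the abstract gives for the representation-learning step.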
