Curriculum Reinforcement Learning via Constrained Optimal Transport

Pascal Klink; Haoyi Yang; Carlo D'Eramo; Jan Peters; Joni Pajarinen
In: Kamalika Chaudhuri; Stefanie Jegelka; Le Song; Csaba Szepesvári; Gang Niu; Sivan Sabato (Eds.). International Conference on Machine Learning (ICML 2022), 17-23 July 2022, Baltimore, Maryland, USA. Pages 11341-11358, Proceedings of Machine Learning Research, Vol. 162, PMLR, 2022.


Curriculum reinforcement learning (CRL) allows solving complex tasks by generating a tailored sequence of learning tasks, starting from easy ones and gradually increasing their difficulty. Although the potential of curricula in RL has been clearly demonstrated in a variety of works, it is less clear how to generate them for a given learning environment, and a number of methods have been proposed to automate this step. In this work, we focus on framing curricula as interpolations between task distributions, which has previously been shown to be a viable approach to CRL. Identifying key issues of existing methods, we frame curriculum generation as a constrained optimal transport problem between task distributions. Benchmarks show that this way of generating curricula can improve upon existing CRL methods, yielding high performance across tasks with different characteristics.
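The sketch below is not the authors' method (which additionally enforces performance constraints on the learner); it only illustrates the underlying idea of interpolating between an easy initial task distribution and a hard target distribution along an optimal transport (Wasserstein-2) path. It assumes one-dimensional Gaussian task distributions, for which the interpolation has a closed form, and all task parameters and numbers are hypothetical.

```python
import numpy as np


def w2_geodesic_gaussian(mu0, sigma0, mu1, sigma1, t):
    """Wasserstein-2 geodesic between two 1-D Gaussian task distributions.

    For 1-D Gaussians, the optimal-transport interpolation at time t in [0, 1]
    is again Gaussian, with mean and standard deviation interpolated linearly.
    """
    mu_t = (1.0 - t) * mu0 + t * mu1
    sigma_t = (1.0 - t) * sigma0 + t * sigma1
    return mu_t, sigma_t


def sample_curriculum_stage(mu0, sigma0, mu1, sigma1, t, n_tasks, rng):
    """Sample task parameters (e.g., goal positions) from the stage-t distribution."""
    mu_t, sigma_t = w2_geodesic_gaussian(mu0, sigma0, mu1, sigma1, t)
    return rng.normal(mu_t, sigma_t, size=n_tasks)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical easy initial vs. hard target task distributions.
    mu_easy, sigma_easy = 0.5, 0.3
    mu_hard, sigma_hard = 3.0, 0.05
    for t in np.linspace(0.0, 1.0, 5):
        tasks = sample_curriculum_stage(mu_easy, sigma_easy,
                                        mu_hard, sigma_hard, t, 4, rng)
        print(f"t={t:.2f}: example tasks {np.round(tasks, 2)}")
```

In a full CRL setting, the interpolation progress would not follow a fixed schedule in t but would be adapted to the agent's current performance, which is where the constraints of the constrained optimal transport formulation come in.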
