Curriculum reinforcement learning via constrained optimal transport

Pascal Klink, Haoyi Yang, Carlo D'Eramo, Joni Pajarinen, Jan Peters

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review



Curriculum reinforcement learning (CRL) allows solving complex tasks by generating a tailored sequence of learning tasks, starting from easy ones and subsequently increasing their difficulty. Although the potential of curricula in RL has been clearly demonstrated in a variety of works, it is less clear how to generate them for a given learning environment, which has resulted in a range of methods aiming to automate this task. In this work, we focus on the idea of framing curricula as interpolations between task distributions, which has previously been shown to be a viable approach to CRL. Identifying key issues of existing methods, we frame the generation of a curriculum as a constrained optimal transport problem between task distributions. Benchmarks show that this way of generating curricula can improve upon existing CRL methods, yielding high performance on tasks with diverse characteristics.
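To make the core idea of interpolating between task distributions concrete, the sketch below shows a minimal one-dimensional illustration (not the paper's actual algorithm, which additionally constrains the interpolation by agent performance). For 1-D empirical distributions with equally many samples, the optimal transport coupling simply matches sorted samples, so points on the 2-Wasserstein geodesic between an "easy" and a "hard" task distribution can be computed by interpolating matched quantiles. All function and variable names here are illustrative.

```python
import numpy as np

def wasserstein_interpolate(source, target, alpha):
    """Return samples from the 2-Wasserstein geodesic interpolant between
    two 1-D empirical distributions with equal sample counts.

    In 1-D, the optimal coupling matches samples in sorted order, so the
    geodesic at step `alpha` is a linear blend of matched quantiles.
    """
    s = np.sort(np.asarray(source, dtype=float))
    t = np.sort(np.asarray(target, dtype=float))
    return (1.0 - alpha) * s + alpha * t

# Illustrative setup: easy tasks concentrated near difficulty 0,
# target (hard) tasks concentrated near difficulty 10.
rng = np.random.default_rng(0)
easy = rng.normal(0.0, 0.5, size=256)
hard = rng.normal(10.0, 0.5, size=256)

# A curriculum as a sequence of interpolated task distributions.
alphas = np.linspace(0.0, 1.0, 5)
curriculum = [wasserstein_interpolate(easy, hard, a) for a in alphas]
for a, tasks in zip(alphas, curriculum):
    print(f"alpha={a:.2f}  mean task difficulty={tasks.mean():.2f}")
```

Each intermediate distribution shifts gradually from the easy toward the hard tasks; the paper's contribution is, roughly, to constrain how fast this interpolation may proceed so that sampled tasks remain solvable for the current agent.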
Original language: English
Title of host publication: Proceedings of the 39th International Conference on Machine Learning
Number of pages: 18
Publication status: Published - 2022
MoE publication type: A4 Conference publication
Event: International Conference on Machine Learning - Baltimore, United States
Duration: 17 Jul 2022 – 23 Jul 2022
Conference number: 39

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: International Conference on Machine Learning
Abbreviated title: ICML
Country/Territory: United States


