Self-Paced Deep Reinforcement Learning

Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Curriculum reinforcement learning (CRL) improves the learning speed and stability of an agent by exposing it to a tailored series of tasks throughout learning. Despite empirical successes, an open question in CRL is how to automatically generate a curriculum for a given reinforcement learning (RL) agent, avoiding manual design. In this paper, we propose an answer by interpreting curriculum generation as an inference problem, where distributions over tasks are progressively learned to approach the target task. This approach yields automatic curriculum generation whose pace is controlled by the agent; it has a solid theoretical motivation and is easily integrated with deep RL algorithms. In our experiments, the curricula generated with the proposed algorithm significantly improve learning performance across several environments and deep RL algorithms, matching or outperforming existing state-of-the-art CRL algorithms.
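The abstract describes curricula as task distributions that are progressively shifted toward the target task at a pace controlled by the agent. The sketch below illustrates that idea in minimal form, assuming a Gaussian distribution over a scalar task parameter and hypothetical `train_agent`/`evaluate_agent` routines supplied by the caller; the paper's actual algorithm solves a constrained inference problem, not the simple performance-gated interpolation shown here.

```python
import numpy as np

# Minimal self-paced curriculum sketch (illustrative only, not the paper's
# exact algorithm). Tasks are parameterized by a scalar context sampled from
# a Gaussian that is gradually shifted toward the target task distribution,
# but only once the agent performs well enough on the current tasks.
def self_paced_curriculum(train_agent, evaluate_agent,
                          mu0=0.0, sigma0=1.0,              # easy initial tasks
                          mu_target=5.0, sigma_target=0.1,  # hard target tasks
                          perf_threshold=0.8,               # return needed to progress
                          step=0.2, iterations=50, n_tasks=10):
    mu, sigma = mu0, sigma0
    for _ in range(iterations):
        # Sample a batch of task contexts from the current curriculum distribution.
        contexts = np.random.normal(mu, sigma, size=n_tasks)
        train_agent(contexts)

        # Move the distribution toward the target only when the agent copes
        # with the current tasks; this is what lets the agent set the pace.
        if evaluate_agent(contexts) >= perf_threshold:
            mu += step * (mu_target - mu)
            sigma += step * (sigma_target - sigma)
    return mu, sigma
```

The design choice mirrored here is that progress toward the target distribution is gated on the agent's own performance, which is what makes the curriculum "self-paced" rather than following a fixed schedule.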
Original language: English
Title of host publication: Proceedings of the 34th Conference on Neural Information Processing Systems, NeurIPS 2020
Number of pages: 12
Publication status: E-pub ahead of print - 2020
MoE publication type: A4 Article in a conference publication
Event: Conference on Neural Information Processing Systems - Virtual, Vancouver, Canada
Duration: 6 Dec 2020 - 12 Dec 2020
Conference number: 34

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Volume: 33
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country: Canada
City: Vancouver
Period: 06/12/2020 - 12/12/2020
