Self-Paced Deep Reinforcement Learning

Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Research output: Article in a book/conference proceedings › Conference contribution › Scientific › peer-reviewed

19 Citations (Scopus)

Abstract

Curriculum reinforcement learning (CRL) improves the learning speed and stability of an agent by exposing it to a tailored series of tasks throughout learning. Despite empirical successes, an open question in CRL is how to automatically generate a curriculum for a given reinforcement learning (RL) agent, avoiding manual design. In this paper, we propose an answer by interpreting curriculum generation as an inference problem, where distributions over tasks are progressively learned to approach the target task. This approach yields automatic curriculum generation, whose pace is controlled by the agent, has a solid theoretical motivation, and is easily integrated with deep RL algorithms. In the conducted experiments, the curricula generated with the proposed algorithm significantly improve learning performance across several environments and deep RL algorithms, matching or outperforming state-of-the-art existing CRL algorithms.
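The abstract's core idea, a task distribution that shifts toward the target task at a pace gated by the agent's competence, can be sketched with a toy numerical example. This is an illustrative sketch, not the paper's actual inference-based algorithm: the Gaussian task distribution, the performance model, and all thresholds below are assumptions made for the example.

```python
import numpy as np

# Toy sketch of self-paced curriculum generation (NOT the paper's exact
# algorithm): tasks are sampled from a Gaussian whose mean is pulled toward
# the target task's parameter, but only when the agent's measured
# performance clears a threshold, so the agent controls the pace.

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0        # current task distribution (starts on easy tasks)
target_mu = 5.0             # parameter of the hard target task
mastered = 0.0              # toy stand-in for the agent's current competence

for epoch in range(50):
    tasks = rng.normal(mu, sigma, size=16)
    # toy performance model: success decays with distance from mastered tasks
    perf = float(np.mean(np.exp(-np.abs(tasks - mastered))))
    if perf >= 0.4:         # competent enough: advance the curriculum
        mu += 0.2 * (target_mu - mu)    # shift task mean toward the target
        sigma = max(0.05, 0.9 * sigma)  # concentrate around current tasks
        mastered = mu       # pretend the agent masters what it practices
```

Over the loop, `mu` converges to `target_mu`, mimicking how the learned task distribution approaches the target task; in the paper this progression is instead derived from an inference formulation rather than the hand-coded rule used here.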
Original language: English
Title: Proceedings of the 34th Conference on Neural Information Processing Systems, NeurIPS 2020
Publisher: Morgan Kaufmann Publishers
Number of pages: 12
Publication status: Published - 2020
OKM publication type: A4 Article in a conference publication
Event: Conference on Neural Information Processing Systems - Virtual, Vancouver, Canada
Duration: 6 Dec 2020 - 12 Dec 2020
Conference number: 34

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Volume: 33
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviation: NeurIPS
Country/Territory: Canada
City: Vancouver
Period: 06/12/2020 - 12/12/2020
