Abstract
Offline reinforcement learning makes it possible to learn agent behaviors from a fixed dataset without interacting with the environment. However, depending on the quality of the offline dataset, such pre-trained agents may have limited performance and may need to be further fine-tuned online by interacting with the environment. During online fine-tuning, the performance of the pre-trained agent may collapse quickly due to the sudden distribution shift from offline to online data. While constraints enforced by offline RL methods, such as a behavior cloning loss, prevent this to some extent, these constraints also significantly slow down online fine-tuning by forcing the agent to stay close to the behavior policy. We propose to adaptively weight the behavior cloning loss during online fine-tuning based on the agent's performance and training stability. Moreover, we use a randomized ensemble of Q functions to further increase the sample efficiency of online fine-tuning by performing a large number of learning updates. Experiments show that the proposed method yields state-of-the-art offline-to-online reinforcement learning performance on the popular D4RL benchmark.
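As a rough illustration of the adaptive weighting idea described in the abstract, the sketch below adjusts a behavior-cloning coefficient from recent episode returns: the constraint is relaxed while returns keep improving and tightened when performance drops. The class name `AdaptiveBCWeight`, the return windows, and the fixed step size are illustrative assumptions, not the exact update rule from the paper.

```python
import numpy as np


class AdaptiveBCWeight:
    """Hypothetical sketch of an adaptive behavior-cloning weight.

    The coefficient alpha multiplies a BC term added to the actor objective,
    e.g. L_actor = -Q(s, pi(s)) + alpha * ||pi(s) - a_data||^2.
    alpha is decreased when recent returns are stable or improving and
    increased when performance drops, keeping early fine-tuning stable.
    """

    def __init__(self, alpha_init=1.0, alpha_min=0.0, alpha_max=1.0, step=0.05):
        self.alpha = alpha_init
        self.alpha_min, self.alpha_max = alpha_min, alpha_max
        self.step = step
        self.returns = []

    def update(self, episode_return):
        self.returns.append(episode_return)
        if len(self.returns) < 10:
            # Too little online data: keep the initial constraint strength.
            return self.alpha
        recent = np.mean(self.returns[-5:])
        previous = np.mean(self.returns[-10:-5])
        if recent >= previous:
            # Training looks stable/improving: relax the BC constraint.
            self.alpha = max(self.alpha - self.step, self.alpha_min)
        else:
            # Performance dropped: pull the policy back toward the behavior policy.
            self.alpha = min(self.alpha + self.step, self.alpha_max)
        return self.alpha
```

In a fine-tuning loop, `update` would be called once per evaluation episode and the returned weight used for the next batch of actor updates; the randomized Q-ensemble mentioned in the abstract is orthogonal to this weighting and is not shown here.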
Original language | English |
---|---|
Title of host publication | Proceedings of the European Symposium on Artificial Neural Networks, 2022 |
Publisher | European Symposium on Artificial Neural Networks (ESANN) |
Number of pages | 6 |
ISBN (Electronic) | 9782875870841 |
DOIs / permanent links | |
Publication status | Published - 2022 |
MoEC publication type | A4 Article in a conference publication |
Event | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning - Bruges, Belgium. Duration: 5 Oct 2022 → 7 Oct 2022. Conference number: 30 |
Conference
Conference | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
---|---|
Abbreviation | ESANN |
Country/Territory | Belgium |
City | Bruges |
Period | 05/10/2022 → 07/10/2022 |