Reader: Model-based language-instructed reinforcement learning

Research output: Conference article in proceedings › Scientific › peer-reviewed

77 Downloads (Pure)

Abstract

We explore how to build accurate world models that are partially specified by language, and how to plan with them in the face of novelty and uncertainty. We propose the first model-based reinforcement learning approach to tackle the environment Read to Fight Monsters (RTFM; Zhong et al., 2019), a grounded policy learning problem. In RTFM an agent has to reason over a set of rules and a goal, both described in a language manual, together with its observations, while accounting for the uncertainty arising from the stochasticity of the environment, in order to successfully generalize its policy to test episodes. We demonstrate the superior performance and sample efficiency of our model-based approach compared to the existing model-free SOTA agents in eight variants of RTFM. Furthermore, we show how the agent's plans can be inspected, which represents progress towards more interpretable agents.
Original language: English
Title: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Publisher: Association for Computational Linguistics
Pages: 16583–16599
ISBN (print): 979-8-89176-060-8
Status: Published - 2023
Ministry of Education publication type: A4 Article in conference proceedings
Event: Conference on Empirical Methods in Natural Language Processing - Singapore, Singapore
Duration: 6 Dec 2023 – 10 Dec 2023

Conference

Conference: Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP
Country/Territory: Singapore
City: Singapore
Period: 06/12/2023 – 10/12/2023
