Learning to Play Imperfect-Information Games by Imitating an Oracle Planner

Rinu Boney, Alexander Ilin, Juho Kannala, Jarno Seppanen

Research output: Journal article › Article › Scientific › Peer-reviewed

2 Citations (Scopus)
63 Downloads (Pure)

Abstract

We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces. Previous attempts to tackle such challenging games have largely focused on model-free learning methods, often requiring hundreds of years of experience to produce competitive agents. Our approach is based on model-based planning. We tackle the problem of partial observability by first building an (oracle) planner that has access to the full state of the environment and then distilling the knowledge of the oracle to a (follower) agent, which is trained to play the imperfect-information game by imitating the oracle's choices. We experimentally show that planning with naive Monte Carlo tree search performs poorly in large combinatorial action spaces. We therefore propose planning with a fixed-depth tree search and decoupled Thompson sampling for action selection. We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman, and that the follower policy successfully learns to implement them by training on a few hundred battles.
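The action-selection rule mentioned in the abstract lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of decoupled Thompson sampling at a single node of a fixed-depth search for a simultaneous-move game: each player keeps an independent Beta posterior per action and samples them independently to form a joint action. The class and parameter names, the Beta(1, 1) priors, and the binary-reward update rule are illustrative assumptions, not details taken from the paper.

import numpy as np

class DecoupledThompsonSampling:
    """Per-player Thompson sampling at one node of a fixed-depth search.

    "Decoupled" means the joint action space is never enumerated: each
    player keeps its own Beta posterior over the success probability of
    each of its own actions and selects independently of the other players.
    Hypothetical sketch; priors and reward mapping are assumptions.
    """

    def __init__(self, num_players, num_actions):
        # Beta(1, 1) uniform priors for every (player, action) pair.
        self.alpha = np.ones((num_players, num_actions))
        self.beta = np.ones((num_players, num_actions))

    def select_joint_action(self):
        # Sample a success-probability estimate for every action of every
        # player; each player independently picks its best sampled action.
        samples = np.random.beta(self.alpha, self.beta)
        return samples.argmax(axis=1)  # one action index per player

    def update(self, joint_action, rewards):
        # rewards: per-player outcomes in [0, 1] from evaluating the
        # resulting state (e.g. a rollout or a learned value estimate).
        for player, action in enumerate(joint_action):
            self.alpha[player, action] += rewards[player]
            self.beta[player, action] += 1.0 - rewards[player]

A planner would repeatedly call select_joint_action, simulate the chosen joint action to a fixed depth, and feed the per-player outcomes back through update; because each player's posterior is independent, the number of statistics grows linearly rather than combinatorially with the number of action components.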

Original language: English
Pages: 262-272
Number of pages: 11
Journal: IEEE Transactions on Games
Volume: 14
Issue: 2
Early online date: 2021
DOI (permanent link)
Status: Published - 2022
Ministry of Education publication type: A1 Original article in a scientific journal
