Backpropagation Through Agents

Research output: Journal article › Conference article › Scientific › peer-reviewed

Abstract

A fundamental challenge in multi-agent reinforcement learning (MARL) is learning the joint policy in an extremely large search space that grows exponentially with the number of agents. Fully decentralized policy factorization significantly restricts this search space and may therefore lead to sub-optimal policies. In contrast, an auto-regressive formulation can represent a much richer class of joint policies by factorizing the joint policy into a product of conditional individual policies. While such factorization makes the action dependencies among agents explicit in sequential execution, it does not take full advantage of those dependencies during learning. In particular, subsequent agents give the preceding agents no feedback about their decisions. In this paper, we propose a new framework, Back-Propagation Through Agents (BPTA), that directly accounts for both an agent's own policy updates and the learning of its dependent counterparts. This is achieved by propagating feedback through the action chains. Within this framework, our Bidirectional Proximal Policy Optimisation (BPPO) outperforms state-of-the-art methods. Extensive experiments on matrix games, StarCraft II v2, Multi-Agent MuJoCo, and Google Research Football demonstrate the effectiveness of the proposed method.
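To make the factorization concrete, here is a sketch in our own notation (the paper's exact formulation may differ): the joint policy of n agents is decomposed into a chain of conditional individual policies, each conditioning on the state and on the actions already selected by the preceding agents.

\[
\pi_\theta(\mathbf{a} \mid s) \;=\; \prod_{i=1}^{n} \pi_{\theta_i}\!\left(a^{i} \,\middle|\, s,\, a^{1}, \dots, a^{i-1}\right)
\]

On this reading, the feedback that BPTA propagates corresponds to the chain-rule terms that flow from a later agent j back to an earlier agent i through the sampled actions, so that agent i's gradient combines its own policy-gradient term with contributions from its dependent counterparts:

\[
\nabla_{\theta_i} J \;\approx\; \nabla_{\theta_i} J_i \;+\; \sum_{j > i} \frac{\partial J_j}{\partial a^{j}} \, \frac{\partial a^{j}}{\partial a^{i}} \, \frac{\partial a^{i}}{\partial \theta_i}
\]

For the term \(\partial a^{i} / \partial \theta_i\) to be well-defined, the action sampling must be differentiable (e.g., via reparameterization or a Gumbel-softmax relaxation); we assume this here purely for illustration.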

Original language: English
Pages: 13718-13726
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue: 12
DOI - permanent links
Status: Published - 25 March 2024
OKM publication type: A4 Article in a conference publication
Event: AAAI Conference on Artificial Intelligence - Vancouver, Canada
Duration: 20 February 2024 - 27 February 2024
Conference number: 38
