Abstract

Relative overgeneralization (RO) occurs in cooperative multi-agent learning tasks when agents converge towards a suboptimal joint policy due to overfitting to the suboptimal behaviors of other agents. No methods have been proposed for addressing RO in multi-agent policy gradient (MAPG) methods, although these methods produce state-of-the-art results. To address this gap, we propose a general yet simple framework to enable optimistic updates in MAPG methods that alleviate the RO problem. Our approach involves clipping the advantage to eliminate negative values, thereby facilitating optimistic updates in MAPG. The optimism prevents individual agents from quickly converging to a local optimum. Additionally, we provide a formal analysis to show that the proposed method retains optimality at a fixed point. In extensive evaluations on a diverse set of tasks, including the Multi-agent MuJoCo and Overcooked benchmarks, our method outperforms strong baselines on 13 out of 19 tested tasks and matches the performance of the baselines on the rest.
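As a rough illustration of the advantage-clipping idea described in the abstract, the sketch below applies it inside a generic PPO-style surrogate loss in PyTorch. The function name, the PPO-style ratio clipping, and all arguments are assumptions made for illustration; the paper's exact objective and implementation may differ.

```python
import torch

def optimistic_policy_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    """PPO-style surrogate loss with advantages clipped at zero (illustrative sketch).

    Clipping negative advantages to zero means transitions judged worse than
    the baseline do not push the policy away from those actions, which is the
    "optimistic update" idea described in the abstract. All names here are
    hypothetical, not the paper's API.
    """
    optimistic_adv = torch.clamp(advantages, min=0.0)  # drop negative advantages
    ratio = torch.exp(log_probs - old_log_probs)       # importance sampling ratio
    unclipped = ratio * optimistic_adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * optimistic_adv
    return -torch.min(unclipped, clipped).mean()       # negate to maximize surrogate
```

Because the clipped advantage is non-negative, the gradient only reinforces actions that look at least as good as the baseline, which is what prevents an agent from prematurely abandoning actions that appear poor only because of other agents' current suboptimal behavior.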

Original language: English
Pages: 61186-61202
Number of pages: 17
Publication: Proceedings of Machine Learning Research
Volume: 235
Status: Published - 2024
Publication type (OKM): A4 Article in conference proceedings
Event: International Conference on Machine Learning - Vienna, Austria
Duration: 21 July 2024 - 27 July 2024
Conference number: 41
