Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control

Lukas Kesper, Sebastian Trimpe, Dominik Baumann

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

1 Citation (Scopus)
69 Downloads (Pure)

Abstract

Event-triggered communication and control provide high control performance in networked control systems without overloading the communication network. However, most approaches require precise mathematical models of the system dynamics, which may not always be available. Model-free learning of communication and control policies provides an alternative. Nevertheless, existing methods typically consider single-agent settings. This paper proposes a model-free reinforcement learning algorithm that jointly learns resource-aware communication and control policies for distributed multi-agent systems from data. We evaluate the algorithm in a high-dimensional and nonlinear simulation example and discuss promising avenues for further research.
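To make the event-triggered idea concrete, here is a minimal illustrative sketch (not the paper's algorithm): each agent applies control based on a shared state estimate and only broadcasts its true state when a trigger condition fires, so communication happens on events rather than at every step. The threshold trigger, proportional controller, and integrator dynamics are all simplifying assumptions for illustration; in the paper, both the trigger and the controller are policies learned from data.

```python
import numpy as np

# Hypothetical threshold trigger: fire when the shared estimate has
# drifted too far from the agent's true state. In a learned setting,
# this would be a trained policy rather than a fixed rule.
def trigger_policy(state, estimate, threshold=0.5):
    return np.linalg.norm(state - estimate) > threshold

# Hypothetical controller acting on the shared estimate (stand-in for
# a learned control policy).
def control_policy(estimate, gain=0.8):
    return -gain * estimate

def step(states, estimates, dt=0.1):
    """One synchronous step for all agents.

    Returns updated states, updated shared estimates, and the number
    of transmissions triggered this step.
    """
    transmissions = 0
    new_states, new_estimates = [], []
    for x, xhat in zip(states, estimates):
        if trigger_policy(x, xhat):
            xhat = x.copy()       # broadcast: estimate re-synchronized
            transmissions += 1
        u = control_policy(xhat)  # control uses the shared estimate
        x = x + dt * u            # simple single-integrator dynamics
        new_states.append(x)
        new_estimates.append(xhat)
    return new_states, new_estimates, transmissions

if __name__ == "__main__":
    states = [np.array([2.0, 0.0]), np.array([0.0, -2.0])]
    estimates = [np.zeros(2), np.zeros(2)]
    total = 0
    for _ in range(100):
        states, estimates, k = step(states, estimates)
        total += k
    print("transmissions:", total)
```

The point of the sketch is the trade-off the abstract refers to: the agents stabilize toward the origin while transmitting only on a small fraction of the time steps, instead of communicating at every step as in periodic control.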
Original language: English
Title of host publication: Proceedings of the Learning for Dynamics and Control Conference
Publisher: JMLR
Pages: 1072-1085
Number of pages: 14
Volume: 211
Publication status: Published - 1 Jun 2023
MoE publication type: A4 Conference publication
Event: Learning for Dynamics and Control Conference - University of Pennsylvania, Philadelphia, United States
Duration: 14 Jun 2023 - 16 Jun 2023
Conference number: 5
https://l4dc.seas.upenn.edu/

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498

Conference

Conference: Learning for Dynamics and Control Conference
Abbreviated title: L4DC
Country/Territory: United States
City: Philadelphia
Period: 14/06/2023 - 16/06/2023

Keywords

  • Electrical Engineering and Systems Science - Systems and Control

