Multi-Agent Reinforcement Learning Approach Scheduling for In-X Subnetworks

Research output: Conference article in proceedings, Scientific, Peer-reviewed


Abstract

We consider radio resource scheduling in a network of multiple non-coordinated in-X subnetworks which move with respect to each other. Each subnetwork is controlled by an independent agent that schedules resources to the devices within the subnetwork. The only information about the decisions of other agents comes through interference measurements, which are non-stationary due to subnetwork mobility and fast fading effects. The agents aim to serve the devices in their subnetworks with a fixed data rate and high reliability. The problem is cast as a multi-agent non-stationary Markov Decision Process (MDP) with unknown transition functions. We approach the problem via Multi-Agent Deep Reinforcement Learning (DRL), leveraging Long Short-Term Memory (LSTM) networks to handle the non-stationarity and Deep Deterministic Policy Gradient (DDPG) to manage high-dimensional continuous action spaces. Candidate actions given by DRL are quantized to discrete actions by a novel binary tree search method subject to reliability constraints. Simulation results indicate that the proposed LSTM-based DRL scheduling strategy outperforms strategies based on Feed-Forward Neural Networks, Centralized Training with Decentralized Execution approaches found in the literature, and conventional heuristic approaches.
Original language: English
Title of host publication: Proceedings of the IEEE Vehicular Technology Conference
Publisher: IEEE
Number of pages: 7
Publication status: Accepted/In press - 2024
OKM publication type: A4 Article in conference proceedings

