Reinforcement Learning for Physical Layer Communications

Philippe Mary, Visa Koivunen, Christophe Moy

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review

Abstract

In this chapter, we give comprehensive examples of applying RL to the optimization of the physical layer of wireless communications, defining different classes of problems and possible solutions for handling them. In Section 9.2, we present the basic theory needed to address an RL problem, i.e., Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs), as well as two important and widely used RL algorithms, Q-learning and SARSA. We also introduce the deep reinforcement learning (DRL) paradigm, and the section ends with an introduction to the multi-armed bandit (MAB) framework. Section 9.3 focuses on toy examples that illustrate how the basic concepts of RL are employed in communication systems. We present applications drawn from the literature with simplified system models, using notation similar to that of Section 9.2 of this chapter. Section 9.3 also addresses the modeling of RL problems, i.e., how action spaces, state spaces and rewards are chosen. The chapter concludes in Section 9.4 with a prospective discussion of RL trends and ends with a review of the broader state of the art in Section 9.5.
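As a point of reference for the Q-learning and SARSA algorithms named in the abstract, the following is a minimal, generic sketch of tabular Q-learning on a toy chain MDP. It is not taken from the chapter; the environment, reward structure and hyperparameters (alpha, gamma, epsilon) are illustrative assumptions, and the comment marks where SARSA's on-policy update would differ.

```python
# Minimal sketch of tabular Q-learning on a toy chain MDP (illustrative only).
import numpy as np

n_states, n_actions = 5, 2        # toy chain: states 0..4, actions {left, right}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed hyperparameters
rng = np.random.default_rng(0)

def step(s, a):
    """Toy dynamics: action 1 moves right, action 0 moves left; reward 1 at the right end."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

Q = np.zeros((n_states, n_actions))
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning (off-policy) update: bootstrap with the max over next actions.
        # SARSA would instead bootstrap with Q[s_next, a_next] for the action actually taken.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s_next].max()) - Q[s, a])
        s = s_next

print(Q)  # learned action values for the toy chain
```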
Original language: English
Title of host publication: Machine Learning and Wireless Communications
Publisher: Cambridge University Press
ISBN (Print): 978-1-108-83298-4
Publication status: Published - Aug 2022
MoE publication type: A3 Book section, Chapters in research books

