Understanding the Evolution of Linear Regions in Deep Reinforcement Learning

Setareh Cohan*, Nam Hee Kim, David Rolnick, Michiel van de Panne

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Policies produced by deep reinforcement learning are typically characterised by their learning curves, but they remain poorly understood in many other respects. ReLU-based policies result in a partitioning of the input space into piecewise linear regions. We seek to understand how observed region counts and their densities evolve during deep reinforcement learning, using empirical results that span a range of continuous control tasks and policy network dimensions. Intuitively, we may expect that during training, the region density increases in the areas that are frequently visited by the policy, thereby affording fine-grained control. We use recent theoretical and empirical results on the linear regions induced by neural networks in supervised learning settings to ground and compare our results. Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy. However, the trajectories themselves also increase in length during training, and thus the region densities decrease as seen from the perspective of the current trajectory. Our findings suggest that the complexity of deep reinforcement learning policies does not principally emerge from a significant growth in the complexity of functions observed on-and-around trajectories of the policy.
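As a hedged illustration of the measurement the abstract describes (this is not code from the paper; the network shape, the random trajectory, and the function names are assumptions made for the example), each distinct ReLU activation pattern corresponds to one linear region of the network, so the number of regions visited along a trajectory can be estimated by hashing the pattern at each visited state:

import numpy as np

def activation_pattern(weights, biases, x):
    """Return the ReLU on/off pattern (one bool per hidden unit) for input x.

    Two inputs with identical patterns lie in the same linear region of the
    piecewise-linear function computed by the network.
    """
    pattern = []
    h = x
    for W, b in zip(weights, biases):
        pre = W @ h + b
        pattern.append(pre > 0)           # which units are active
        h = np.maximum(pre, 0.0)          # ReLU
    return tuple(np.concatenate(pattern).tolist())

def count_regions_along_trajectory(weights, biases, states):
    """Count distinct activation patterns visited by a sequence of states."""
    patterns = {activation_pattern(weights, biases, s) for s in states}
    return len(patterns)

# Toy example: a random two-hidden-layer "policy" network and a random trajectory.
rng = np.random.default_rng(0)
dims = [8, 64, 64]                                    # observation dim, two hidden layers
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(2)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(2)]
trajectory = rng.standard_normal((500, dims[0]))      # stand-in for states visited by a policy

print(count_regions_along_trajectory(weights, biases, trajectory))

Dividing the resulting count by the trajectory's arc length gives the kind of region-density measure discussed in the abstract.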
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 35 (NeurIPS 2022)
Editors: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh
Publisher: Morgan Kaufmann Publishers
Number of pages: 13
ISBN (Print): 9781713871088
Publication status: Published - 2022
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - New Orleans, United States
Duration: 28 Nov 2022 – 9 Dec 2022
Conference number: 36
https://nips.cc/

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Volume: 35
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: United States
City: New Orleans
Period: 28/11/2022 – 09/12/2022
Internet address: https://nips.cc/
