Abstract
A wide range of reinforcement learning (RL) algorithms have been proposed, in which agents learn from interactions with a simulated environment. Executing such RL training loops is computationally expensive, but current RL systems fail to support the training loops of different RL algorithms efficiently on GPU clusters: they either hard-code algorithm-specific strategies for parallelization and distribution, or they accelerate only parts of the computation on GPUs (e.g., DNN policy updates). We observe that current systems lack an abstraction that decouples the definition of an RL algorithm from its strategy for distributed execution.
We describe MSRL, a distributed RL training system that uses the new abstraction of a fragmented dataflow graph (FDG) to execute RL algorithms in a flexible way. An FDG is a heterogeneous dataflow representation of an RL algorithm, which maps functions from the RL training loop to independent parallel dataflow fragments. Fragments account for the diverse nature of RL algorithms: each fragment can execute on a different device through a low-level dataflow implementation, e.g., an operator graph of a DNN engine, a CUDA GPU kernel, or a multi-threaded CPU process. At deployment time, a distribution policy governs how fragments are mapped to devices, without requiring changes to the RL algorithm implementation. Our experiments show that MSRL exposes trade-offs between different execution strategies, while surpassing the performance of existing RL systems with fixed execution strategies.
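To make the abstraction concrete, here is a minimal, hypothetical Python sketch of the FDG idea: an RL training loop is written once as a set of fragments, and a separate deployment-time distribution policy maps fragments to devices. The `Fragment` class, the `act`/`learn` fragment functions, and the `deploy` helper are illustrative assumptions for this sketch, not MSRL's actual API.

```python
# Hypothetical sketch (not MSRL's actual API): an RL training loop
# decomposed into independent dataflow fragments, with a separate
# deployment-time distribution policy deciding where each fragment runs.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Fragment:
    """One parallel unit of a fragmented dataflow graph (FDG)."""
    name: str
    fn: Callable   # e.g. environment interaction or DNN policy update
    backend: str   # e.g. "dnn-graph", "cuda-kernel", "cpu-process"

# The algorithm is defined once as fragment functions...
def act(policy_params, env_state):
    """Environment interaction: roll out the policy, emit trajectories."""
    ...

def learn(policy_params, batch):
    """DNN policy update on a batch of trajectories."""
    ...

fragments = [
    Fragment("actor", act, backend="cpu-process"),
    Fragment("learner", learn, backend="dnn-graph"),
]

# ...while the distribution policy, chosen at deployment time, maps
# fragments to devices without touching the algorithm code above.
distribution_policy: Dict[str, List[str]] = {
    "actor":   ["cpu:0", "cpu:1", "cpu:2", "cpu:3"],  # many parallel actors
    "learner": ["gpu:0"],                             # single GPU learner
}

def deploy(fragments: List[Fragment], policy: Dict[str, List[str]]) -> None:
    """Place each fragment on the devices its policy entry names."""
    for f in fragments:
        for device in policy[f.name]:
            print(f"placing fragment {f.name!r} ({f.backend}) on {device}")

deploy(fragments, distribution_policy)
```

Under these assumptions, switching from a single-GPU learner to a multi-GPU one means editing only `distribution_policy`, which mirrors the abstract's claim that execution strategies change without modifying the RL algorithm implementation.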
Original language | English
---|---
Title of host publication | Proceedings of the 2023 USENIX Annual Technical Conference
Publisher | USENIX - The Advanced Computing Systems Association
Pages | 977-993
ISBN (Electronic) | 978-1-939133-35-9
Publication status | Published - 2023
MoE publication type | A4 Conference publication
Event | USENIX Annual Technical Conference, Boston, United States, 10 Jul 2023 → 12 Jul 2023, https://www.usenix.org/conference/atc23
Conference

Conference | USENIX Annual Technical Conference
---|---
Abbreviated title | ATC
Country/Territory | United States
City | Boston
Period | 10/07/2023 → 12/07/2023
Internet address | https://www.usenix.org/conference/atc23