Scheduling conditional task graphs with deep reinforcement learning

Anton Debner*, Maximilian Krahn, Vesa Hirvisalo

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Scientific › peer-review


Abstract

Industrial applications often depend on costly computation infrastructures. Well-optimised schedulers provide cost-efficient utilisation of these computational resources, but they can take significant effort to implement. It can also be beneficial to split the application into a hierarchy of tasks represented as a conditional task graph. In such a case, the tasks in the hierarchy are conditionally executed, depending on the output of earlier tasks. While such conditional task graphs can save computational resources, they also add complexity to scheduling. Recently, there has been research on Deep Reinforcement Learning (DRL) based schedulers, but they mostly do not address conditional task graphs. We design a DRL based scheduler for conditional task graphs in a heterogeneous execution environment. We measure how the probabilities of a conditional task graph affect the scheduler and how these adverse effects can be mitigated. We show that our solution learns to beat traditional baseline schedulers in a fraction of an hour.
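To illustrate the notion of a conditional task graph described above, the following is a minimal, hypothetical Python sketch (the class and function names are illustrative and not taken from the paper): each task carries branch probabilities, and only one successor branch is actually executed, so the realised workload varies between runs.

```python
import random

class Task:
    """A node in a conditional task graph (illustrative sketch only)."""
    def __init__(self, name, cost):
        self.name = name      # task identifier
        self.cost = cost      # abstract compute cost of the task
        self.branches = []    # list of (probability, successor) pairs

    def add_branch(self, probability, successor):
        self.branches.append((probability, successor))

def sample_execution(root, rng):
    """Walk one conditionally-chosen path from the root.

    After each task finishes, a single successor is drawn according to
    the branch probabilities, so tasks on untaken branches never run.
    """
    executed, task = [], root
    while task is not None:
        executed.append(task.name)
        if not task.branches:
            task = None                      # leaf: execution ends
        else:
            r, acc = rng.random(), 0.0
            nxt = task.branches[-1][1]       # fallback to last branch
            for p, succ in task.branches:
                acc += p
                if r < acc:
                    nxt = succ
                    break
            task = nxt
    return executed

# Tiny example graph: a -> b with prob 0.7, a -> c with prob 0.3, b -> d.
a, b, c, d = Task("a", 1), Task("b", 2), Task("c", 3), Task("d", 1)
a.add_branch(0.7, b)
a.add_branch(0.3, c)
b.add_branch(1.0, d)

path = sample_execution(a, random.Random(0))
print(path)
```

A scheduler that assumes every task will run would reserve resources for b, c, and d alike; the conditional structure means some of those reservations are wasted, which is the added scheduling complexity the abstract refers to.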

Original language: English
Pages (from-to): 1-7
Number of pages: 7
Journal: Proceedings of Machine Learning Research
Volume: 233
Publication status: Published - 2024
MoE publication type: A4 Conference publication
Event: Northern Lights Deep Learning Conference - Tromsø, Norway
Duration: 9 Jan 2024 - 11 Jan 2024
Conference number: 5
