Deep reinforcement learning for fuel cost optimization in district heating

Jifei Deng, Miro Eklund, Seppo Sierla, Jouni Savolainen, Hannu Niemistö, Tommi Karhela, Valeriy Vyatkin

Research output: Contribution to journal › Article › Scientific › peer-review



This study delves into the application of deep reinforcement learning (DRL) frameworks for optimizing setpoints in district heating systems, which experience hourly fluctuations in air temperature, customer demand, and fuel
prices. The potential for energy conservation and cost reduction through setpoint optimization, involving adjustments to supply temperature and thermal energy storage utilization, is significant. However, the inherent
nonlinear complexities of the system render conventional manual methods ineffective. To address these challenges, we introduce a novel learning framework with an expert knowledge module tailored for DRL techniques.
The framework leverages system status information to facilitate learning. The training is performed by employing model-free DRL methods and a refined digital twin of the Espoo district heating system. The expert
module, accounting for power plant capacities, ensures actionable directives aligned with operational feasibility. Empirical validation through comprehensive simulations demonstrates the efficacy of the proposed approach. Comparative analyses against manual methods and evolutionary techniques highlight the approach’s superior ability to curtail fuel costs. This study advances the understanding of DRL in district heating optimization, offering a promising avenue for enhanced energy efficiency and cost savings.
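The expert module described above screens the policy's raw actions against power plant capacity limits before they reach the simulated plant. A minimal sketch of this idea is shown below; the setpoint variables follow the abstract (supply temperature and thermal-storage utilization), but the numeric bounds and function names are illustrative assumptions, not values from the paper.

```python
def clamp(x, lo, hi):
    """Project a scalar onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

# Assumed feasibility bounds (hypothetical, for illustration only):
SUPPLY_TEMP_RANGE = (70.0, 115.0)    # supply-temperature setpoint, degrees C
STORAGE_POWER_RANGE = (-50.0, 50.0)  # storage charge(-)/discharge(+), MW

def expert_filter(raw_action):
    """Map a raw DRL action to an operationally feasible setpoint pair.

    raw_action: [supply_temp_setpoint, storage_power_setpoint]
    Returns the clamped action actually sent to the digital twin.
    """
    temp = clamp(raw_action[0], *SUPPLY_TEMP_RANGE)
    power = clamp(raw_action[1], *STORAGE_POWER_RANGE)
    return [temp, power]

# An out-of-range policy output is projected onto the feasible region:
feasible = expert_filter([130.0, -80.0])  # [115.0, -50.0]
```

In practice such a filter can be more elaborate (e.g. coupling constraints between plants, or penalizing the agent for infeasible proposals); simple projection is one common way to keep model-free training safe.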
Original language: English
Article number: 104955
Number of pages: 10
Journal: Sustainable Cities and Society
Early online date: 21 Sept 2023
Publication status: Published - Dec 2023
MoE publication type: A1 Journal article - refereed


