TY - JOUR
T1 - Dynamic Hierarchical Reinforcement Learning Framework for Energy-Efficient 5G Base Stations in Urban Environments
AU - Xu, Dianlei
AU - Su, Xiang
AU - Premsankar, Gopika
AU - Wang, Huandong
AU - Tarkoma, Sasu
AU - Hui, Pan
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The energy consumption of 5G base stations (BSs) is significantly higher than that of 4G BSs, creating challenges for operators due to increased costs and carbon emissions. Existing solutions address this issue by switching off BSs during specific periods or by forming cooperation coalitions in which some BSs deactivate while others serve users. However, these approaches often rely on fixed geographic configurations, making them unsuitable for urban areas with numerous BSs and mobile users. To tackle these challenges, we propose a hierarchical reinforcement learning (RL) framework for energy conservation in large-scale 5G networks. In the upper layer, we propose a deep Q-network integrated with a graph convolutional network that dynamically groups BSs into coalitions from a macro perspective. This layer focuses on high-level coalition formation to optimize system-wide energy efficiency by considering the global state of the network. In the lower layer, we combine an attention mechanism with multi-agent RL and graph convolutional networks to design a scalable algorithm that maximizes local energy efficiency by optimizing cooperation within each coalition. These two layers align global coalition dynamics with local intra-coalition cooperation to achieve system-wide energy optimization. Moreover, we accurately model large-scale urban 5G scenarios using a high-fidelity network simulator, which enables our RL framework to learn from realistic feedback. Extensive experiments conducted with the simulator demonstrate that our framework achieves energy savings of up to 75.6%, significantly outperforming baseline approaches. These findings highlight the effectiveness of our hierarchical RL optimization framework in addressing the energy consumption challenges of large-scale 5G networks.
KW - 5G
KW - attention mechanism
KW - energy conservation
KW - hierarchical reinforcement learning
KW - large-scale network simulation
KW - multi-agent reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=105001861871&partnerID=8YFLogxK
U2 - 10.1109/TMC.2025.3557280
DO - 10.1109/TMC.2025.3557280
M3 - Article
AN - SCOPUS:105001861871
SN - 1536-1233
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
ER -