A Cloud-Edge Collaboration Solution for Distribution Network Reconfiguration Using Multi-Agent Deep Reinforcement Learning
Document Type
Article
Source
IEEE Transactions on Power Systems; 2024, Vol. 39 Issue: 2 p3867-3879, 13p
ISSN
0885-8950; 1558-0679
Abstract
Network reconfiguration can maintain the optimal operation of a distribution network under increasing penetration of distributed generation (DG). However, network reconfiguration problems may not be solved quickly by traditional methods in large-scale distribution networks. In this context, a cloud-edge collaboration framework based on multi-agent deep reinforcement learning (MADRL) is proposed, in which the MADRL model is trained centrally in the cloud center and executed in a decentralized manner on edge servers, reducing both the training cost and the execution latency of MADRL. In addition, a discrete multi-agent soft actor-critic (MASAC) algorithm is introduced as the base algorithm to address the non-stationary environment problem in MADRL. Online safe learning and offline safe learning are then combined for the practical distribution network reconfiguration task, so that the neural networks of the MADRL model are updated under operational constraints. Specifically, a novel offline algorithm called multi-agent constraints penalized Q-learning (MACPQ) is proposed to reduce the cost of the trial-and-error process of MADRL, allowing agents to pre-train their policies from a historical dataset while respecting constraints. Meanwhile, a new online MADRL method called primal-dual MASAC is proposed to further improve agent performance by interacting directly with the physical distribution network under safe action exploration. Finally, the superiority of the proposed methods is verified on the IEEE 33-bus system and a practical 445-bus system.
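The primal-dual approach mentioned in the abstract rests on a standard idea from constrained reinforcement learning: a Lagrange multiplier is raised when the expected constraint cost exceeds its limit and lowered otherwise, so the policy ends up maximizing reward minus the penalized cost. The sketch below illustrates only that generic mechanism; the function names, learning rate, and toy cost trace are illustrative assumptions, not the authors' primal-dual MASAC implementation.

```python
# Minimal sketch of the primal-dual mechanism used in constrained RL
# methods such as primal-dual actor-critic variants: projected gradient
# ascent on a Lagrange multiplier, with the policy maximizing
#   E[reward] - lambda * E[cost].
# Names and the toy cost trace are illustrative, not the paper's code.

def dual_update(lmbda: float, avg_cost: float, cost_limit: float,
                lr: float = 0.1) -> float:
    """Raise lambda when the constraint is violated, decay it otherwise,
    projecting back onto lambda >= 0."""
    return max(0.0, lmbda + lr * (avg_cost - cost_limit))

def lagrangian_reward(reward: float, cost: float, lmbda: float) -> float:
    """Penalized reward that the primal (policy) optimizer maximizes."""
    return reward - lmbda * cost

# Toy trace: the policy initially violates the limit (cost > 1.0), so
# lambda rises; once the cost drops below the limit, lambda decays.
lmbda = 0.0
for avg_cost in [1.5, 1.5, 1.2, 0.8, 0.6]:
    lmbda = dual_update(lmbda, avg_cost, cost_limit=1.0)
print(round(lmbda, 3))  # prints 0.06
```

In the multi-agent setting described in the abstract, each edge-server agent would apply such a penalized objective locally while the multiplier enforces the shared network constraints during safe exploration.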