Academic Article

Data-Efficient Deep Reinforcement Learning-Based Optimal Generation Control in DC Microgrids
Document Type
Periodical
Source
IEEE Systems Journal, 18(1):426-437, Mar. 2024
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Microgrids
Training
Costs
Optimal control
Real-time systems
Voltage control
Fans
Centralized training distributed execution
deep reinforcement learning (DRL)
data-efficient
nonconvex system
optimal generation control
Language
English
ISSN
1932-8184
1937-9234
2373-7816
Abstract
Because of their simplicity and high energy-utilization efficiency, dc microgrids are gaining popularity as an attractive option for the optimal operation of numerous distributed energy resources. The nonlinearity and nonconvexity of the optimal power flow problem make it difficult to apply conventional control approaches directly. With the development of machine learning in recent years, deep reinforcement learning (DRL) has emerged as a means of solving such complex optimal control problems. This article proposes a DRL-based optimal control scheme built on the twin-delayed deep deterministic policy gradient (TD3) algorithm to achieve optimal generation control in dc microgrids. The generation cost of the distributed generators is minimized, while key constraints, such as generation limits and bus voltage bounds, are satisfied. The proposed approach links the optimal control and reinforcement learning frameworks through a centralized training and distributed execution structure. Case studies show that reinforcement learning algorithms can optimize nonlinear and nonconvex systems with fast dynamics by using tailored reward function designs, data sampling, and constraint management strategies. In addition, populating the experience replay buffer before training markedly reduces learning failures, enhancing the data efficiency of the DRL process.
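The abstract emphasizes two ingredients of the scheme: a reward that trades off generation cost against constraint violations, and a replay buffer populated before training begins. The sketch below is a minimal illustration of those two ideas only, not the paper's method: the single-generator cost model, the bounds, the penalty weight, and the toy transition dynamics are all assumptions introduced here for demonstration, and a full TD3 learner (twin critics, delayed policy updates, target smoothing) is omitted.

```python
import random
from collections import deque, namedtuple

import numpy as np

# Hypothetical single-generator dc microgrid model (illustrative only; the
# paper's actual system model, cost coefficients, and bounds are not given here).
Transition = namedtuple("Transition", "state action reward next_state")

P_MIN, P_MAX = 0.0, 1.0      # generation bounds (p.u.), assumed
V_MIN, V_MAX = 0.95, 1.05    # bus-voltage bounds (p.u.), assumed
A, B, C = 0.10, 0.50, 0.02   # quadratic generation-cost coefficients, assumed
PENALTY = 10.0               # constraint-violation penalty weight, assumed


def reward(p_gen: float, v_bus: float) -> float:
    """Negative generation cost, penalized when bounds are violated."""
    cost = A * p_gen ** 2 + B * p_gen + C
    violation = max(0.0, P_MIN - p_gen) + max(0.0, p_gen - P_MAX)
    violation += max(0.0, V_MIN - v_bus) + max(0.0, v_bus - V_MAX)
    return -cost - PENALTY * violation


def step(state: np.ndarray, action: float) -> tuple:
    """Toy transition: the action nudges generation, bus voltage follows loosely."""
    p_gen = float(np.clip(state[0] + action, -0.2, 1.2))
    v_bus = 1.0 + 0.05 * (p_gen - 0.5) + 0.01 * np.random.randn()
    next_state = np.array([p_gen, v_bus])
    return next_state, reward(p_gen, v_bus)


def prefill_replay_buffer(capacity: int = 10_000, n_samples: int = 2_000) -> deque:
    """Fill the buffer with random-policy transitions before training starts."""
    buffer = deque(maxlen=capacity)
    state = np.array([0.5, 1.0])
    for _ in range(n_samples):
        action = float(np.random.uniform(-0.1, 0.1))
        next_state, r = step(state, action)
        buffer.append(Transition(state, action, r, next_state))
        state = next_state
    return buffer


if __name__ == "__main__":
    replay = prefill_replay_buffer()
    batch = random.sample(replay, 32)  # a TD3 learner would draw minibatches like this
    print(f"buffer size: {len(replay)}, sample reward: {batch[0].reward:.3f}")
```

Starting from a pre-filled buffer, the agent's first gradient updates are computed on a diverse set of transitions rather than on the few states visited by an untrained policy, which is one plausible reading of how pre-population reduces learning failures as described in the abstract.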