Academic Paper

Optimal Energy Scheduling of a Microgrid based on Offline-to-Online Deep Reinforcement Learning
Document Type
Conference
Source
2024 6th International Conference on Energy Systems and Electrical Power (ICESEP), pp. 1088-1092, Jun. 2024
Subject
Components, Circuits, Devices and Systems
Power, Energy and Industry Applications
Robotics and Control Systems
Renewable energy sources
Heuristic algorithms
Microgrids
Deep reinforcement learning
Solids
Safety
Energy management
microgrids
energy management
offline-to-online reinforcement learning
Language
English
Abstract
With the increasing penetration of renewable energy sources in microgrids, energy management optimization becomes increasingly complex. Classical online deep reinforcement learning (DRL) algorithms often take unsafe actions during early exploration, potentially leading to equipment damage or wasted resources. To improve the safety and feasibility of microgrid scheduling, this study proposes an Offline-to-Online with Expert Data (OOED) DRL approach, which initializes an effective reinforcement learning (RL) control policy by imitating expert behavior without any initial environment interaction and then refines it through online interaction, thereby improving learning efficiency. Experimental results show that OOED outperforms both purely offline and purely online RL, delivering superior solutions in dynamic microgrid environments.
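To illustrate the general offline-to-online idea described in the abstract, the following Python sketch pretrains a policy by behavior cloning on expert transitions and then fine-tunes it with a simple REINFORCE update through online interaction. This is a minimal illustration under stated assumptions, not the paper's actual OOED algorithm: the toy battery environment, the rule-based "expert", the network sizes, and all hyperparameters are hypothetical.

# Minimal offline-to-online sketch (illustrative; not the paper's OOED implementation).
# Phase 1: behavior cloning on expert data. Phase 2: online policy-gradient fine-tuning.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ToyBatteryEnv:
    """Toy microgrid battery: state = (state of charge, electricity price)."""
    def reset(self):
        self.soc = torch.rand(1).item()
        self.price = torch.rand(1).item()
        return torch.tensor([self.soc, self.price])

    def step(self, action):  # 0 = discharge, 1 = idle, 2 = charge
        delta = {0: -0.1, 1: 0.0, 2: 0.1}[action]
        reward = -delta * self.price * 10.0           # pay to charge, earn by discharging
        self.soc = min(max(self.soc + delta, 0.0), 1.0)
        if self.soc in (0.0, 1.0) and delta != 0.0:   # penalize pushing against SOC limits
            reward -= 1.0
        self.price = torch.rand(1).item()
        return torch.tensor([self.soc, self.price]), reward

policy = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# ---- Phase 1: offline pretraining by imitating expert behavior ----
def expert_action(state):                             # hypothetical rule-based expert
    return 2 if state[1] < 0.5 else 0                 # charge when cheap, else discharge

env = ToyBatteryEnv()
states = torch.stack([env.reset() for _ in range(512)])
actions = torch.tensor([expert_action(s) for s in states])
bc_loss = nn.CrossEntropyLoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = bc_loss(policy(states), actions)           # behavior cloning objective
    loss.backward()
    optimizer.step()

# ---- Phase 2: online fine-tuning with a REINFORCE update ----
for episode in range(50):
    state, log_probs, rewards = env.reset(), [], []
    for _ in range(24):                               # 24-step (hourly) horizon
        dist = Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward = env.step(action.item())
        rewards.append(reward)
    episode_return = sum(rewards)
    optimizer.zero_grad()
    (-torch.stack(log_probs).sum() * episode_return).backward()  # policy-gradient step
    optimizer.step()

The offline phase gives the online phase a policy that already mimics reasonable expert behavior, so early online exploration avoids the arbitrary actions a randomly initialized policy would take; the paper's reported safety and efficiency benefits follow this same two-stage structure.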