Academic Paper

On the Benefits of Transfer Learning and Reinforcement Learning for Electric Short-term Load Forecasting
Document Type
Conference
Source
2022 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics), pp. 195–203, Aug. 2022
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Load forecasting
Computational modeling
Buildings
Transfer learning
Training data
Predictive models
Prediction algorithms
Time-series
short-term load forecasting
reinforcement learning
transfer learning
Language
English
Abstract
Accurate short-term load forecasting plays an essential role in effective modern power system operations. Recently, various deep-learning-based time-series forecasting algorithms have shown superior performance. These algorithms usually require an adequate amount of training samples to learn a reliable prediction model. However, in some real-world scenarios only limited training samples are available, e.g., when learning to predict electric load in a newly built neighborhood. Under such strict constraints, both classical and deep-learning-based time-series forecasting algorithms suffer from high prediction errors and over-fitting due to the limited training data. On the other hand, a large amount of historical data collected from other buildings may be available and could help in learning the forecasting model. Therefore, in this work, we propose to tackle the short-term residential electric load forecasting problem from a transfer learning perspective. The goal is to use the large amount of historical data from other source buildings to learn reliable forecasting models for a target building that has only limited training data. In particular, we first use the Autoformer, a state-of-the-art (SOTA) transformer-based time-series forecasting algorithm, to learn a forecasting model from each source building. Then, we leverage a reinforcement learning algorithm to select among the learned forecasting models when making predictions for the target building; we name the resulting method Time-Series Double DQN (TS-DDQN). To validate the efficacy of the proposed method, we conduct extensive experiments on different real-world datasets. Experimental results show that TS-DDQN consistently outperforms baseline algorithms by a large margin.
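The core idea in the abstract, learning to pick which pretrained source-building model should forecast the target building, can be illustrated with a minimal sketch. This is not the authors' implementation: it replaces the pretrained Autoformer models with two hypothetical constant predictors and replaces the deep Double DQN with tabular double Q-learning over a one-step (bandit-style) reward equal to the negative forecast error; every name and dynamic below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for forecasters pretrained on two source buildings:
# model A is accurate at night, model B is accurate during the day.
def source_model_a(hour):
    return 5.0  # good fit for night-time load

def source_model_b(hour):
    return 7.0  # good fit for day-time load

def true_load(hour):
    # Toy target-building load: low at night, high during the day.
    return 5.0 if hour < 12 else 7.0

MODELS = [source_model_a, source_model_b]
N_STATES, N_ACTIONS = 2, 2  # state: night/day bucket; action: which source model

# Double Q-learning keeps two value tables: one selects the greedy action,
# the other evaluates it, which is the bias-reduction trick behind Double DQN.
qa = np.zeros((N_STATES, N_ACTIONS))
qb = np.zeros((N_STATES, N_ACTIONS))
alpha, eps = 0.1, 0.1  # learning rate and epsilon-greedy exploration rate

for step in range(5000):
    hour = int(rng.integers(0, 24))
    s = 0 if hour < 12 else 1
    combined = qa + qb
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(combined[s]))
    # Reward: negative absolute forecast error of the selected source model.
    reward = -abs(MODELS[a](hour) - true_load(hour))
    # One-step double update: a coin flip decides which table gets updated.
    if rng.random() < 0.5:
        qa[s, a] += alpha * (reward - qa[s, a])
    else:
        qb[s, a] += alpha * (reward - qb[s, a])
```

After training, the learned selector routes night-time states to model A and day-time states to model B, i.e., `np.argmax((qa + qb)[s])` returns the lower-error source model for each state. In the paper's setting the tables would be replaced by a Q-network over time-series features, but the selection logic is the same.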