Journal Article

A Reinforcement Learning-Based Decision System for Electricity Pricing Plan Selection by Smart Grid End Users
Document Type
Periodical
Source
IEEE Transactions on Smart Grid, 12(3):2176-2187, May 2021
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Power, Energy and Industry Applications
Pricing
Companies
Prediction algorithms
Energy consumption
Electricity supply industry
Smart grids
Indexes
Smart grid end user
decision system
electricity market
value-based Q learning
demand response
Language
English
ISSN
Print ISSN: 1949-3053
Electronic ISSN: 1949-3061
Abstract
With the development of deregulated retail power markets, end users equipped with smart meters and controllers can optimize their consumption cost portfolios by choosing among pricing plans offered by different retail electricity companies. This article proposes a reinforcement learning-based decision system for selecting electricity pricing plans that minimizes the electricity payment and consumption dissatisfaction of an individual smart grid end user. The decision problem is modeled as a transition-probability-free Markov decision process (MDP) with an improved state framework. The problem is solved using a kernel-approximator-integrated batch Q-learning algorithm, in which modifications to sampling and data representation improve computational and prediction performance. The proposed algorithm can extract the hidden features behind time-varying pricing plans from a continuous high-dimensional state space. Case studies are based on real-world historical pricing-plan data, and the optimal decision policy is learned without a priori information about the market environment. Results of several experiments demonstrate that the proposed decision model constructs a precise predictive policy for each individual user, effectively reducing their cost and energy-consumption dissatisfaction.
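To illustrate the general technique the abstract names, the sketch below shows batch (fitted) Q-learning with a kernel function approximator on a synthetic plan-selection task. The state features, number of plans, reward shape, and kernel bandwidth are all illustrative assumptions, not the paper's actual model; the paper's specific sampling and data-representation modifications are not reproduced here.

```python
import numpy as np

# Minimal sketch of kernel-approximated batch (fitted) Q-learning for
# choosing among pricing plans. All modeling choices below (features,
# reward, bandwidth) are assumptions for illustration only.

rng = np.random.default_rng(0)
N_PLANS = 3      # actions: candidate pricing plans (assumed)
GAMMA = 0.9      # discount factor (assumed)
BANDWIDTH = 0.5  # Gaussian-kernel width (assumed)

def gaussian_kernel(X, Y, h=BANDWIDTH):
    """Pairwise Gaussian kernel between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2))

class KernelQ:
    """Kernel ridge regression of Q(s, a): one regressor per action."""
    def __init__(self, lam=1e-3):
        self.lam = lam
        self.X = [None] * N_PLANS
        self.alpha = [None] * N_PLANS

    def fit(self, S, A, y):
        for a in range(N_PLANS):
            Xa = S[A == a]
            K = gaussian_kernel(Xa, Xa)
            self.alpha[a] = np.linalg.solve(
                K + self.lam * np.eye(len(Xa)), y[A == a])
            self.X[a] = Xa

    def predict(self, S):
        q = np.zeros((len(S), N_PLANS))
        for a in range(N_PLANS):
            q[:, a] = gaussian_kernel(S, self.X[a]) @ self.alpha[a]
        return q

# Batch of transitions (s, a, r, s'), here synthetic: the state is a
# two-dimensional feature vector (e.g. time-of-day and demand level).
S = rng.uniform(0, 1, size=(300, 2))
A = rng.integers(0, N_PLANS, size=300)
S2 = np.clip(S + rng.normal(0, 0.05, S.shape), 0, 1)
# Reward = negative cost; each plan is cheapest in a different regime.
R = -np.abs(S[:, 1] - A / (N_PLANS - 1))

# Fitted Q-iteration: repeatedly regress r + gamma * max_a' Q(s', a')
# onto (s, a) using the kernel approximator, entirely from the batch.
q = KernelQ()
target = R.copy()
for _ in range(20):
    q.fit(S, A, target)
    target = R + GAMMA * q.predict(S2).max(axis=1)

policy = q.predict(S).argmax(axis=1)  # recommended plan per state
```

Because the targets are computed from the fixed batch rather than from live interaction, no transition probabilities are ever estimated, matching the transition-probability-free MDP formulation described in the abstract.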