Journal Article

Multi-Agent Graph Reinforcement Learning Method for Electric Vehicle on-Route Charging Guidance in Coupled Transportation Electrification
Document Type
Periodical
Source
IEEE Transactions on Sustainable Energy, 15(2):1180-1193, Apr. 2024
Subject
Power, Energy and Industry Applications
Geoscience
Computing and Processing
Charging stations
Electric vehicle charging
Transportation
Power systems
Power distribution
Reinforcement learning
Real-time systems
Electric vehicle
integrated transportation electrification system
charging guidance
multi-agent graph reinforcement learning
Language
English
ISSN
1949-3029 (Print)
1949-3037 (Electronic)
Abstract
This paper proposes a multi-agent deep graph reinforcement learning-based EV on-route charging guidance strategy that minimizes the charging cost for EV drivers in an uncertain and complex environment. First, a real-time online EV charging guidance framework for the bi-timescale coupled transportation-electrification system is proposed. On the slow timescale, the distribution locational marginal price at the node hosting each charging station is obtained from the power purchase cost optimization of the power distribution network. On the fast timescale, multi-agent deep reinforcement learning serves real-time EV charging requests. Second, charging stations are modeled as agents, and their potential competition for future charging demand is taken into account. A multi-agent actor-critic algorithm with embedded graph attention networks is proposed to optimize the charging decision-making of EV drivers; the graph attention networks exploit the interactions between agents' observations. Case studies are carried out on a practical area of Xi'an, China, and the necessity of each model component is analyzed. The effectiveness of the proposed approach in reducing charging cost and its applicability to PV-equipped scenarios are verified. Convergence performance and scalability are verified by comparison with the soft actor-critic (SAC) and multi-agent deep deterministic policy gradient (MADDPG) algorithms.
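The core mechanism the abstract describes — each charging-station agent attending to the other agents' observations through a graph attention layer before acting — can be sketched roughly as follows. This is a minimal single-head illustration in NumPy under standard graph-attention assumptions; the function name, shapes, and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def gat_aggregate(obs, adj, W, a, leaky_slope=0.2):
    """Single-head graph attention over agent observations.

    obs: (N, F) matrix, one observation row per charging-station agent.
    adj: (N, N) binary adjacency (1 where agent i may attend to agent j).
    W:   (F, H) shared linear projection.
    a:   (2H,) attention parameter vector.
    Returns (out, alpha): aggregated features (N, H) and attention weights (N, N).
    """
    h = obs @ W                      # project each agent's observation
    n = h.shape[0]
    # pairwise logits e_ij = LeakyReLU(a^T [h_i || h_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            z = np.concatenate([h[i], h[j]]) @ a
            e[i, j] = z if z > 0 else leaky_slope * z
    # mask out non-neighbors, then softmax over each agent's neighborhood
    e = np.where(adj > 0, e, -1e9)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ h, alpha
```

In an actor-critic setting of the kind the abstract names, the aggregated features would feed each agent's policy or value head, letting a station's decision reflect the observed state (and hence the anticipated demand competition) of neighboring stations.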