Scholarly Article

On Distributed Model-Free Reinforcement Learning Control With Stability Guarantee
Document Type
Periodical
Source
IEEE Control Systems Letters, 5(5):1615-1620, Nov. 2021
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Feedback control
Power system stability
Eigenvalues and eigenfunctions
Decision making
Computational modeling
Mathematical model
Dynamical systems
Distributed control
learning control
reinforcement learning
stability guarantee
interconnected systems
Language
English
ISSN
2475-1456
Abstract
Distributed learning can enable scalable and effective decision making in numerous complex cyber-physical systems such as smart transportation, robotic swarms, and power systems. However, most existing learning paradigms do not guarantee the stability of the system, and this limitation can hinder the wide deployment of machine learning in the decision making of safety-critical systems. This letter presents a stability-guaranteed distributed reinforcement learning (SGDRL) framework for interconnected linear subsystems, without knowledge of the subsystem models. While the learning process requires data from a peer-to-peer (p2p) communication architecture, the control implementation of each subsystem is based only on its local states. The stability of the interconnected subsystems is ensured by a diagonally dominant eigenvalue condition, which is then used in a model-free RL algorithm to learn the stabilizing control gains. The RL algorithm follows an off-policy iterative framework, with interleaved policy evaluation and policy update steps. We numerically validate our theoretical results by performing simulations on four interconnected subsystems.
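To make the off-policy iterative structure described in the abstract concrete, below is a minimal single-subsystem sketch of model-free policy iteration for a linear system with quadratic cost (Q-learning for LQR in the style of classical off-policy least-squares policy iteration). All names, system matrices, and tuning values are hypothetical; this is not the paper's SGDRL algorithm, which additionally couples several subsystems through p2p data exchange and a diagonally dominant eigenvalue condition.

```python
# Hedged sketch: off-policy policy iteration for one linear subsystem.
# Assumptions (not from the paper): the dynamics below, an initial
# stabilizing gain K, and the stage-cost weights Qc, Rc.
import numpy as np

rng = np.random.default_rng(0)

# Simulator only -- the learner never reads A or B (model-free).
A = np.array([[1.0, 0.1],
              [0.0, 1.1]])
B = np.array([[0.0],
              [0.1]])
nx, nu = 2, 1
Qc, Rc = np.eye(nx), np.eye(nu)      # stage cost x'Qc x + u'Rc u
K = np.array([[1.0, 5.0]])           # assumed initial stabilizing gain

def quad_features(z):
    """Features of the quadratic form z'Hz for symmetric H."""
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(len(z)) for j in range(i, len(z))])

def unpack(theta, n):
    """Rebuild symmetric H from its feature-weight vector."""
    H = np.zeros((n, n)); idx = 0
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = theta[idx]; idx += 1
    return H

# Off-policy data: behavior policy = initial gain + exploration noise.
X, U, Xn = [], [], []
x = rng.standard_normal(nx)
for k in range(400):
    u = -K @ x + 0.5 * rng.standard_normal(nu)
    X.append(x); U.append(u); Xn.append(A @ x + B @ u)
    x = Xn[-1] if k % 20 else rng.standard_normal(nx)  # periodic resets

# Interleaved policy evaluation (least squares on the Q-function's
# Bellman equation) and greedy policy update, reusing the same dataset.
for it in range(8):
    Phi, y = [], []
    for x_k, u_k, x_n in zip(X, U, Xn):
        z = np.concatenate([x_k, u_k])
        z_n = np.concatenate([x_n, -K @ x_n])       # target-policy action
        Phi.append(quad_features(z) - quad_features(z_n))
        y.append(x_k @ Qc @ x_k + u_k @ Rc @ u_k)
    theta, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(y), rcond=None)
    H = unpack(theta, nx + nu)
    K = np.linalg.solve(H[nx:, nx:], H[nx:, :nx])   # policy update
    print(f"iter {it}: K = {K.ravel()}")
```

Because the Bellman regression holds for any state-action pair generated by the true dynamics, the same exploratory dataset can evaluate every intermediate policy; this is what makes the scheme off-policy. The distributed setting of the letter would replace the single gain K with per-subsystem gains constrained by the diagonally dominant eigenvalue condition.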