Journal Article

Adaptive Optimal Consensus Control of Multiagent Systems With Unknown Dynamics and Disturbances via Reinforcement Learning
Document Type
Periodical
Source
IEEE Transactions on Artificial Intelligence, 5(5):2193-2203, May 2024
Subject
Computing and Processing
Multi-agent systems
Consensus control
Optimal control
Transient analysis
Heuristic algorithms
Autonomous vehicles
Vehicle dynamics
Distributed optimal consensus control
neural networks (NNs)
prescribed performance
reinforcement learning (RL)
unmanned surface vehicles (USVs)
Language
English
ISSN
2691-4581
Abstract
An adaptive optimal consensus control design technique is presented for uncertain multiagent systems with prescribed performance guarantees using a reinforcement learning (RL) algorithm. First, an adaptive neural network identifier is employed to learn the uncertain system dynamics, and a disturbance observer is developed to compensate for time-varying disturbances. Second, a critic-network learning structure is established to obtain an approximate solution of the Hamilton–Jacobi–Bellman (HJB) equations of the multiagent systems. Then, an experience replay method is applied to update the critic network weights without requiring a persistence of excitation condition. Third, RL-based optimized consensus controllers are developed such that 1) the cost function is minimized, 2) the transient and steady-state performance of the consensus error systems is guaranteed, and 3) uniform ultimate boundedness of the closed-loop systems is achieved. Finally, an application to consensus control of unmanned surface vehicles with uncertain hydrodynamic damping is given to demonstrate the effectiveness of the optimal control design technique.
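To illustrate the critic-network step described in the abstract, the following is a minimal sketch, not the paper's implementation: a critic that is linear in a quadratic basis (phi, grad_phi, the placeholder error dynamics, running cost, buffer size, and learning rate are all assumptions) is trained by gradient descent on the HJB residual, with past samples reused from an experience replay buffer so that a persistence-of-excitation condition on the current trajectory is not needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(e):
    """Quadratic basis for the critic: V_hat(e) = W.T @ phi(e)."""
    return np.array([e[0] ** 2, e[1] ** 2, e[0] * e[1]])

def grad_phi(e):
    """Jacobian of the basis with respect to the consensus error e (3 x 2)."""
    return np.array([[2 * e[0], 0.0],
                     [0.0, 2 * e[1]],
                     [e[1], e[0]]])

def hjb_residual(W, e, e_dot, cost):
    """Continuous-time Bellman/HJB residual: dV_hat/dt + running cost."""
    return (W @ grad_phi(e)) @ e_dot + cost

W = np.zeros(3)      # critic weights to be learned
replay = []          # experience replay buffer of (e, e_dot, cost) samples
lr = 0.05            # critic learning rate (assumed value)

for step in range(2000):
    # Hypothetical consensus-error sample and its time derivative, standing in
    # for measured multiagent data; the running cost penalizes the error.
    e = rng.uniform(-1.0, 1.0, size=2)
    e_dot = -0.5 * e                 # placeholder closed-loop error dynamics
    cost = e @ e                     # placeholder running cost r(e, u)
    replay.append((e, e_dot, cost))
    if len(replay) > 50:
        replay.pop(0)

    # Gradient step on the mean squared HJB residual over the replay buffer;
    # replaying stored samples keeps the regression data rich even when the
    # current trajectory is not persistently exciting.
    grad = np.zeros_like(W)
    for e_k, ed_k, c_k in replay:
        delta = hjb_residual(W, e_k, ed_k, c_k)
        grad += delta * (grad_phi(e_k) @ ed_k)
    W -= lr * grad / len(replay)

print("learned critic weights:", W)
```

Under these placeholder dynamics the true value function is V(e) = e.T @ e, so the weights should converge near [1, 1, 0]; in the paper's setting the same residual-minimization idea is applied agent-wise with the identified dynamics and disturbance estimates in place of the placeholders.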