Academic Article

Safe Multiagent Motion Planning Under Uncertainty for Drones Using Filtered Reinforcement Learning
Document Type
Periodical
Source
IEEE Transactions on Robotics, vol. 40, pp. 2529-2542, 2024
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Safety
Planning
Reinforcement learning
Dynamics
Vectors
Uncertainty
Task analysis
Collision avoidance
model predictive control (MPC)
optimization
reinforcement learning (RL)
safe learning-based control
Language
English
ISSN
1552-3098
1941-0468
Abstract
In this article, we consider the problem of safe multiagent motion planning for drones in uncertain, cluttered workspaces. For this problem, we present a tractable motion planner that builds upon the strengths of reinforcement learning (RL) and constrained-control-based trajectory planning. First, we use single-agent RL to learn motion plans from data that reach the target but may not be collision-free. Next, we use convex optimization, chance constraints, and set-based methods for constrained control to ensure safety despite the uncertainty in the workspace, agent motion, and sensing. The proposed approach can handle state and control constraints on the agents and enforces, with high probability, collision avoidance among the agents and with static obstacles in the workspace. It yields a safe, real-time implementable, multiagent motion planner that is simpler to train than methods based solely on learning. Numerical simulations and experiments show the efficacy of the approach.
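The core idea in the abstract is a two-layer pipeline: an RL policy proposes a motion plan, and a constrained-control layer "filters" it for safety before execution. The following is a minimal, hypothetical sketch of that filtering step (not the paper's actual method, which uses chance constraints and set-based reachability): the safe set is reduced to a single half-space a·u ≤ b, such as a linearized collision-avoidance constraint, so the convex program min ‖u − u_rl‖² subject to a·u ≤ b has a closed-form projection. All names (`safety_filter`, `u_rl`) are illustrative.

```python
# Illustrative "safety filter" over an RL action: project the proposed
# action onto a half-space {u : a.u <= b}. In the paper this role is
# played by a convex optimization with chance constraints; here a single
# linear constraint admits a closed-form solution.

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def safety_filter(u_rl, a, b):
    """Return the action closest to u_rl that satisfies a.u <= b."""
    violation = dot(a, u_rl) - b
    if violation <= 0.0:           # RL action already safe: pass through
        return list(u_rl)
    scale = violation / dot(a, a)  # minimal correction along a
    return [ui - scale * ai for ui, ai in zip(u_rl, a)]

# Example: the RL policy commands full speed toward a nearby obstacle;
# the filter returns the closest action respecting the constraint.
u_safe = safety_filter([1.0, 0.0], a=[1.0, 0.0], b=0.5)
print(u_safe)  # [0.5, 0.0]
```

Because the filter is a projection, it alters the learned plan only when the plan would violate a constraint, which is one reason such hybrid schemes can be simpler to train than end-to-end safe RL.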