Academic Paper

Graph Reinforcement Learning for Multi-Aircraft Conflict Resolution
Document Type
Periodical
Source
IEEE Transactions on Intelligent Vehicles (IEEE Trans. Intell. Veh.), 9(3):4529-4540, Mar. 2024
Subject
Transportation
Robotics and Control Systems
Components, Circuits, Devices and Systems
Aircraft
Air traffic control
Decision making
Atmospheric modeling
Intelligent vehicles
Scalability
Reinforcement learning
Conflict resolution
graph reinforcement learning
air traffic management
Language
English
ISSN
2379-8858
2379-8904
Abstract
The escalating density of airspace has led to a sharp increase in conflicts between aircraft. Efficient and scalable conflict resolution methods are crucial to mitigating collision risks. Existing learning-based methods become less effective as the number of aircraft grows because of their redundant information representations. In this paper, to accommodate the increased airspace density, a novel graph reinforcement learning (GRL) method is presented to efficiently learn deconfliction strategies. A time-evolving conflict graph is exploited to represent the local state of each aircraft and the global spatiotemporal relationships among aircraft. Equipped with the conflict graph, GRL can efficiently learn deconfliction strategies by selectively aggregating aircraft state information through a multi-head attention-boosted graph neural network. Furthermore, a temporal regularization mechanism is proposed to enhance learning stability in highly dynamic environments. Comprehensive experimental studies have been conducted on an OpenAI Gym-based flight simulator. The results demonstrate that, compared with existing state-of-the-art learning-based methods, GRL substantially reduces training time while achieving significantly better deconfliction strategies in terms of safety and efficiency metrics. In addition, GRL exhibits strong scalability and robustness as the number of aircraft increases.
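The selective aggregation the abstract describes can be illustrated with a minimal sketch. The function below is not the authors' implementation: it performs scaled dot-product attention over an aircraft's neighbors in the conflict graph, with the state vector split across heads and identity projections standing in for the learned query/key/value matrices a real multi-head GNN layer would train.

```python
import math

def multi_head_aggregate(h_self, h_neighbors, num_heads=2):
    """Attention-weighted aggregation of neighbor aircraft states.

    Illustrative sketch only: each head attends over one slice of the
    state vector; identity projections replace the learned W_q/W_k/W_v
    matrices of an actual graph attention layer.
    """
    d = len(h_self)
    assert d % num_heads == 0, "state dimension must divide evenly across heads"
    d_h = d // num_heads
    out = []
    for h in range(num_heads):
        lo = h * d_h
        # query = ego aircraft's slice; keys/values = neighbors' slices
        q = h_self[lo:lo + d_h]
        scores = [sum(q[i] * nb[lo + i] for i in range(d_h)) / math.sqrt(d_h)
                  for nb in h_neighbors]
        # softmax over conflict-graph neighbors (numerically stabilized)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of neighbor states for this head's slice
        out.extend(sum(w * nb[lo + i] for w, nb in zip(weights, h_neighbors))
                   for i in range(d_h))
    return out  # aggregated message, same dimension as h_self

# Hypothetical 4-D states (e.g. position/velocity features) for one ego
# aircraft and two conflict-graph neighbors.
ego = [1.0, 0.0, 0.5, -0.5]
neighbors = [[0.9, 0.1, 0.4, -0.4], [-1.0, 2.0, 0.0, 0.3]]
msg = multi_head_aggregate(ego, neighbors)
```

Because each head's weights form a convex combination, every component of the aggregated message lies between the corresponding neighbor components; aircraft whose state slice aligns with the ego's receive larger weights, which is the "selective aggregation" effect.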