Academic Paper

RIANet++: Road Graph and Image Attention Networks for Robust Urban Autonomous Driving Under Road Changes
Document Type
Periodical
Source
IEEE Robotics and Automation Letters, 8(11):7815-7822, Nov. 2023
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Roads
Autonomous vehicles
Robot sensing systems
Semantics
Cameras
Visualization
Vehicles
Autonomous vehicle navigation
imitation learning
sensor fusion
vision-based navigation
Language
English
ISSN
2377-3766
2377-3774
Abstract
The structure of roads plays an important role in designing autonomous driving algorithms. We propose a novel road-graph-based driving framework, named RIANet++. The proposed framework considers the road structural scene context by incorporating both graphical features of the road and visual information through an attention mechanism. The proposed framework also addresses the performance degradation caused by road changes and the resulting unreliability of road graph data. For this purpose, we suggest a road change detection module that filters out unreliable road graph data by evaluating the similarity between the camera image and the query road graph. In this letter, we suggest two types of detection methods: semantic matching and graph matching. The semantic matching (resp., graph matching) method computes the similarity score by transforming the road graph data (resp., camera data) into the semantic image domain (resp., road graph domain). In experiments, we test the proposed method in two driving environments: the CARLA simulator and the FMTC real-world environment. The experimental results demonstrate that the proposed driving framework outperforms other baselines and operates robustly under road changes.
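The road change detection idea in the abstract can be sketched as a similarity-threshold gate: render the query road graph into the semantic image domain, compare it with a road mask segmented from the camera image, and discard the graph when agreement is too low. The function names, the IoU similarity measure, and the threshold value below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def semantic_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary road masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def filter_road_graph(camera_road_mask: np.ndarray,
                      graph_road_mask: np.ndarray,
                      threshold: float = 0.5):
    """Keep the road graph only if it still agrees with the camera view.

    camera_road_mask: road pixels segmented from the camera image.
    graph_road_mask:  road pixels rendered from the query road graph
                      (the "semantic matching" direction in the abstract).
    Returns (is_reliable, score); an unreliable graph would be filtered out.
    """
    score = semantic_iou(camera_road_mask, graph_road_mask)
    return score >= threshold, score

# Toy example: graph and camera agree on most road pixels,
# with one stale graph pixel where the road has changed.
cam = np.zeros((4, 4), dtype=bool)
cam[:, :2] = True                      # camera sees road on the left half
graph = np.zeros((4, 4), dtype=bool)
graph[:, :2] = True
graph[0, 2] = True                     # stale road-graph pixel
reliable, score = filter_road_graph(cam, graph, threshold=0.5)
```

In this toy case the masks share 8 road pixels out of 9 in their union, so the graph is judged reliable; a genuinely changed road would drive the score below the threshold and the graph would be ignored in favor of the camera alone.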