Journal Article

Multi-Modal Place Recognition via Vectorized HD Maps and Images Fusion for Autonomous Driving
Document Type
Periodical
Source
IEEE Robotics and Automation Letters (IEEE Robot. Autom. Lett.), 9(5):4710-4717, May 2024
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Visualization
Semantics
Feature extraction
Autonomous vehicles
Image recognition
Location awareness
Roads
Autonomous vehicle navigation
localization
deep learning for visual perception
Language
English
ISSN
2377-3766
2377-3774
Abstract
The deployment of autonomous vehicles and mobile robots requires light, fast, and robust visual place recognition strategies. While visual place recognition has proven effective in favorable conditions, its performance quickly drops when faced with ambiguous visual cues, such as repeating image patterns commonly found in driving environments. To address this problem, a representation that incorporates geometric cues with structural semantics can also be utilized to localize an agent, reducing the reliance on visual cues alone. In this letter, we present the first multi-modal place recognition method for autonomous driving that utilizes both images and vectorized HD maps. Vectorized HD maps have the advantage of being lightweight while providing geometric cues with structural semantics, making them particularly well suited for place recognition. To accomplish this, we employ a hierarchical graph neural network to extract a compact and robust descriptor from a local vectorized map that can be captured from surrounding images. Although HD maps provide concise geometric cues with structural semantics, they sometimes do not provide sufficient features for place recognition, in contrast to images. To cope with this limitation, we propose to adaptively fuse the descriptors extracted from maps and images through a transformer-based solution, combining the complementary strengths of each modality. Extensive experiments on the large-scale driving datasets nuScenes and Argoverse 2 demonstrate that our multi-modal localization outperforms visual-only approaches. Specifically, ours improves the baseline by up to 6.48 percentage points in Recall@1 with less than 10 ms of additional computation.
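As a rough illustration of the kind of transformer-based adaptive fusion the abstract describes, the PyTorch sketch below combines a global map descriptor and a global image descriptor into a single place descriptor used for retrieval. All module names, dimensions, layer counts, and the gating scheme are assumptions made for illustration; they are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveDescriptorFusion(nn.Module):
    """Hypothetical sketch: fuse a map descriptor and an image descriptor
    with a small transformer and an adaptive per-modality gate."""

    def __init__(self, dim: int = 256, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Learned embeddings marking which modality each token comes from.
        self.modality_embed = nn.Parameter(torch.zeros(2, dim))
        # Scalar gate producing adaptive weights for the two modalities.
        self.gate = nn.Linear(dim, 1)

    def forward(self, map_desc: torch.Tensor, img_desc: torch.Tensor) -> torch.Tensor:
        # map_desc, img_desc: (B, dim) descriptors from each modality.
        tokens = torch.stack([map_desc, img_desc], dim=1)       # (B, 2, dim)
        tokens = tokens + self.modality_embed.unsqueeze(0)      # tag modality
        tokens = self.encoder(tokens)                           # cross-modal attention
        weights = torch.softmax(self.gate(tokens), dim=1)       # (B, 2, 1) adaptive weights
        fused = (weights * tokens).sum(dim=1)                   # (B, dim)
        return F.normalize(fused, dim=-1)                       # L2-normalize for retrieval


# Usage: place recognition by cosine similarity between fused descriptors.
fusion = AdaptiveDescriptorFusion()
query = fusion(torch.randn(1, 256), torch.randn(1, 256))
database = fusion(torch.randn(100, 256), torch.randn(100, 256))
scores = query @ database.T  # higher score = more likely the same place
```

The softmax gate lets the model lean on the image descriptor when the map lacks distinctive features, and vice versa, which is one plausible way to realize the adaptive fusion behavior described above.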