Journal Article

A Robust LiDAR-Camera Self-Calibration Via Rotation-Based Alignment and Multi-Level Cost Volume
Document Type
Periodical
Source
IEEE Robotics and Automation Letters, 9(1):627-634, Jan. 2024
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Laser radar
Feature extraction
Calibration
Transformers
Point cloud compression
Cameras
Task analysis
sensor fusion
deep learning
Language
English
ISSN
2377-3766
2377-3774
Abstract
Multi-sensor collaborative perception has become a significant trend in self-driving and robot navigation. The precondition for multi-sensor fusion is accurate calibration between sensors. Traditional LiDAR-Camera calibration relies on laborious manual operations. Several recent studies have demonstrated the advantages of convolutional neural networks regarding feature extraction capabilities. However, the vast modality discrepancy between RGB images and point clouds makes it difficult to find corresponding features, which remains a challenge for LiDAR-Camera calibration. In this letter, we propose a new robust online LiDAR-Camera self-calibration network (SCNet). To reduce the search dimensionality for feature matching, we exploit self-supervised learning to align RGB images with projected depth images in 2D pixel coordinates, thereby pre-aligning the roll angle. In addition, to generate more accurate initial similarity measures between RGB image pixels and their possible corresponding projected depth image pixels, we propose a novel multi-level patch matching method that concatenates cost volumes constructed from multi-level feature maps. Our method achieves a mean absolute calibration error of 0.724 cm in translation and 0.055$^{\circ}$ in rotation in single-frame analysis with miscalibration magnitudes of up to $\pm$1.5 m and $\pm 20^{\circ}$ on the KITTI odometry dataset, which demonstrates the superiority of our method.
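The multi-level cost volume described in the abstract can be pictured with a short sketch. The following minimal PyTorch illustration is hypothetical and not the authors' code: it correlates RGB feature maps with projected-depth feature maps over local patches at several pyramid levels and concatenates the per-level volumes; the function names, the fixed search radius max_disp, and the bilinear upsampling to a common resolution are assumptions made purely for illustration.

# Hypothetical sketch (not the authors' released implementation) of a
# multi-level patch-matching cost volume between RGB features and
# projected-depth-image features.
import torch
import torch.nn.functional as F

def patch_cost_volume(rgb_feat, depth_feat, max_disp=4):
    """Correlate each RGB feature pixel with a (2*max_disp+1)^2 patch of
    depth-image features centered at the same location; returns B x D x H x W."""
    b, c, h, w = rgb_feat.shape
    pad = F.pad(depth_feat, [max_disp] * 4)  # pad H and W by max_disp
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = pad[:, :, dy:dy + h, dx:dx + w]
            # channel-averaged correlation as the similarity measure
            costs.append((rgb_feat * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)

def multi_level_cost_volume(rgb_feats, depth_feats, out_size):
    """Build a cost volume at each feature level, upsample each to a common
    resolution (an assumption for illustration), and concatenate them."""
    volumes = []
    for rf, df in zip(rgb_feats, depth_feats):
        cv = patch_cost_volume(rf, df)
        volumes.append(F.interpolate(cv, size=out_size, mode="bilinear",
                                     align_corners=False))
    return torch.cat(volumes, dim=1)

With, say, feature maps at 1/4, 1/8, and 1/16 resolution, this returns one tensor stacking all per-level similarity measures along the channel dimension, which could then feed a regression head that predicts the calibration correction.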