Academic Article

Static Multitarget-Based Autocalibration of RGB Cameras, 3-D Radar, and 3-D Lidar Sensors
Document Type
Periodical
Source
IEEE Sensors Journal, 23(18):21493-21505, Sep. 2023
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Calibration
Sensors
Radar
Laser radar
Cameras
Intelligent sensors
Three-dimensional displays
Autonomous vehicles
camera
feature extraction
intelligent roadside infrastructure
light detection and ranging (lidar)
radio detection and ranging (radar)
sensor calibration
Language
English
ISSN
1530-437X (Print)
1558-1748 (Electronic)
2379-9153 (CD)
Abstract
For environmental perception, autonomous vehicles and intelligent roadside infrastructure systems contain multiple sensors, that is, radio detection and ranging (radar), light detection and ranging (lidar), and camera sensors, with the aim to detect, classify, and track multiple road users. Data from multiple sensors are fused together to enhance the perception quality of the sensor system because each sensor has strengths and weaknesses, for example, in resolution, distance measurement, and dependency on weather conditions. For data fusion, it is necessary to transform the data from the different sensor coordinates to a common coordinate frame. This process is referred to as multisensor calibration and is a challenging task, which is mostly performed manually. This article introduces a new method for autocalibrating 3-D radar, 3-D lidar, and red-green-blue (RGB) mono-camera sensors using a static multitarget-based system. The proposed method can be used with sensors operating at different frame rates without time synchronization. Furthermore, the described static multitarget system is cost-effective, easy to build, and applicable for short- to long-distance calibration. The experimental results for multiple sets of measurements show good performance, with projection errors measured as a maximum root mean square error (RMSE) of (u, v) = (2.4, 1.8) pixels for lidar-to-camera calibration, an RMSE of (u, v) = (2.2, 3.0) pixels for 3-D radar-to-camera calibration, and an RMSE of (x, y, z) = (2.6, 2.7, 14.0) centimeters for 3-D radar-to-lidar calibration.
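The abstract's evaluation metric (projecting points from one sensor frame into the camera image and measuring a per-axis pixel RMSE) can be illustrated with a minimal sketch. The code below is not the authors' implementation; the rotation, translation, intrinsic matrix, and point values are hypothetical placeholders chosen only to demonstrate the rigid transform, pinhole projection, and RMSE computation.

```python
import numpy as np

def project_to_camera(points, R, t, K):
    """Project 3-D points (e.g., lidar returns) into camera pixels.

    points: (N, 3) points in the source sensor frame.
    R, t:   extrinsic rotation (3x3) and translation (3,) to the camera frame.
    K:      (3x3) pinhole camera intrinsic matrix.
    Returns (N, 2) pixel coordinates (u, v).
    """
    pts_cam = points @ R.T + t        # rigid transform into the camera frame
    pix = pts_cam @ K.T               # apply pinhole intrinsics
    return pix[:, :2] / pix[:, 2:3]   # perspective divide by depth

def projection_rmse(pred_uv, ref_uv):
    """Per-axis RMSE between projected and reference pixel locations."""
    return np.sqrt(np.mean((pred_uv - ref_uv) ** 2, axis=0))

# Hypothetical calibration result: identity rotation, 10 cm lateral offset,
# and a simple intrinsic matrix (fx = fy = 800, principal point at 320, 240).
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

points = np.array([[0.0,  0.0,  5.0],
                   [1.0, -0.5, 10.0]])
uv = project_to_camera(points, R, t, K)
rmse = projection_rmse(uv, uv + np.array([2.0, 1.0]))  # synthetic 2-px / 1-px offset
print(rmse)  # -> [2. 1.]
```

In the paper's evaluation, the reference pixel locations would come from detected calibration targets in the image rather than a synthetic offset; the RMSE values quoted in the abstract, such as (u, v) = (2.4, 1.8) pixels for lidar-to-camera, are of exactly this per-axis form.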