Radar and Lidar Deep Fusion: Providing Doppler Contexts to Time-of-Flight Lidar
Document Type
Periodical
Source
IEEE Sensors Journal, 23(20):25587-25600, Oct. 2023
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Radar
Laser radar
Doppler radar
Doppler effect
Radar detection
Sensors
Radar imaging
Convolutional neural network (CNN)
deep learning
object detection
sensor fusion
Language
English
ISSN
1530-437X
1558-1748
2379-9153
Abstract
This work proposes a novel sensor-fusion-based, single-frame, multiclass object detection method for road users, including vehicles, pedestrians, and cyclists, in which a deep fusion occurs between the lidar point cloud (PC) and the corresponding Doppler contexts, namely, the Doppler features from the radar cube. Based on convolutional neural networks (CNNs), the method consists of two stages: in the first stage, region proposals are generated from the voxelized lidar PC, and, relying on these proposals, Doppler contexts are cropped from the radar cube; in the second stage, using fused features from the lidar and radar, the method performs object detection and object motion status classification. When evaluated on inclement-weather measurements, generated by applying a foggification model to real-life measurements, the proposed method outperforms the lidar-based network by a large margin in terms of the intersection over union (IoU) metric for vulnerable road users, namely, 4.5% and 6.1% improvement for pedestrians and cyclists, respectively. In addition, it achieves an 87% F1 score (81.6% precision and 93.1% recall) for single-frame object motion status classification.
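For orientation, below is a minimal PyTorch sketch of the two-stage pipeline the abstract describes: a stage-1 CNN over the voxelized lidar PC that yields region proposals, a Doppler-context crop from the radar cube at each proposal, and a stage-2 head that fuses the two feature streams for object classification and motion status classification. All names, shapes, and hyperparameters (LidarProposalNet, crop_doppler_context, FusionHead, the 8x8 crop window, the channel widths) are illustrative assumptions; the record does not specify the paper's actual architecture.

# Minimal two-stage lidar-radar fusion sketch (assumed design, not the paper's code).
import torch
import torch.nn as nn


class LidarProposalNet(nn.Module):
    """Stage 1: 3-D CNN over the voxelized lidar point cloud; emits a
    feature volume plus a per-voxel objectness score used as proposals."""

    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.objectness = nn.Conv3d(feat_ch, 1, 1)  # proposal score per voxel

    def forward(self, voxels):
        feats = self.backbone(voxels)
        return feats, torch.sigmoid(self.objectness(feats))


def crop_doppler_context(radar_cube, proposals, size=8):
    """Crop a (size x size) Doppler context around each proposal from the
    radar cube of shape (batch, doppler_bins, range, azimuth). Hypothetical
    indexing: proposals are integer (range, azimuth) cells already mapped
    from lidar coordinates into the radar grid."""
    b, d, r, a = radar_cube.shape
    crops = []
    for i, (ri, ai) in enumerate(proposals):
        r0 = max(0, min(ri - size // 2, r - size))  # clamp crop to cube bounds
        a0 = max(0, min(ai - size // 2, a - size))
        crops.append(radar_cube[i % b, :, r0:r0 + size, a0:a0 + size])
    return torch.stack(crops)  # (n_proposals, doppler_bins, size, size)


class FusionHead(nn.Module):
    """Stage 2: fuse pooled lidar features with encoded Doppler contexts,
    then predict object class and motion status (moving vs. static)."""

    def __init__(self, lidar_ch=32, doppler_bins=16, n_classes=3):
        super().__init__()
        self.radar_enc = nn.Sequential(
            nn.Conv2d(doppler_bins, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(lidar_ch + 32, n_classes)  # vehicle/pedestrian/cyclist
        self.motion_head = nn.Linear(lidar_ch + 32, 2)       # moving/static

    def forward(self, lidar_feat, doppler_crops):
        fused = torch.cat([lidar_feat, self.radar_enc(doppler_crops)], dim=1)
        return self.cls_head(fused), self.motion_head(fused)


if __name__ == "__main__":
    voxels = torch.zeros(1, 1, 16, 64, 64)   # (B, C, Z, Y, X) occupancy grid
    radar = torch.zeros(1, 16, 128, 128)     # (B, doppler_bins, range, azimuth)
    feats, scores = LidarProposalNet()(voxels)
    # In practice, proposals would come from thresholding `scores`; two toy
    # (range, azimuth) cells stand in here.
    crops = crop_doppler_context(radar, [(40, 60), (80, 20)])
    lidar_feat = feats.mean(dim=(2, 3, 4)).repeat(2, 1)  # toy pooled feature per proposal
    cls_logits, motion_logits = FusionHead()(lidar_feat, crops)

The global pooling of the stage-1 feature volume is a deliberate simplification: a faithful implementation would pool lidar features per proposal region (e.g., RoI pooling) before fusing them with the Doppler contexts.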