Academic Paper

SACINet: Semantic-Aware Cross-Modal Interaction Network for Real-Time 3D Object Detection
Document Type
Periodical
Source
IEEE Transactions on Intelligent Vehicles, 9(2):3917-3927, Feb. 2024
Subject
Transportation
Robotics and Control Systems
Components, Circuits, Devices and Systems
Semantics
Feature extraction
Three-dimensional displays
Real-time systems
Point cloud compression
Task analysis
Object detection
Autonomous driving
Real-time 3D object detection
Semantic occupancy perception
Cross-modal fusion
Language
English
ISSN
2379-8858
2379-8904
Abstract
LiDAR-camera fusion-based 3D object detection is one of the main visual perception tasks in autonomous driving, facing the challenges of small targets and occlusions. Image semantics are beneficial for these issues, yet most existing methods apply semantics only in the cross-modal fusion stage to compensate for point geometric features, so the advantages of semantic information are not effectively exploited. Furthermore, the increased network complexity caused by introducing semantics is a major obstacle to real-time performance. In this article, we propose a Semantic-Aware Cross-modal Interaction Network (SACINet) for real-time 3D object detection, which introduces high-level semantics into both key stages of image feature extraction and cross-modal fusion. Specifically, we design a Lightweight Semantic-aware Image Feature Extractor (LSIFE) to enhance semantic sampling of objects while greatly reducing the number of parameters. Additionally, a Semantic-Modulated Cross-modal Interaction Mechanism (SMCIM) is proposed to emphasize semantic details in cross-modal fusion. This mechanism conducts pairwise interactive fusion among geometric features, semantic-aware point-wise image features, and semantic-aware point-wise segmentation features via the designed Conditions Generation Network (CGN) and Semantic-Aware Point-wise Feature Modulation (SAPFM). Ultimately, we construct a real-time (25.2 fps) 3D detector with a small parameter footprint (23.79 MB), which better achieves the trade-off between accuracy and efficiency. Comprehensive experiments on the KITTI benchmark show that SACINet is effective for real-time 3D detection, especially on small and severely occluded targets. We further conduct semantic occupancy perception experiments on the recent nuScenes-Occupancy benchmark, which verify the effectiveness of SMCIM.
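To make the fusion idea in the abstract concrete, the sketch below illustrates one plausible reading of a semantic-modulated, pairwise point-wise fusion: a condition-generation module produces per-channel scale/shift terms from image and segmentation cues, which then modulate the point geometric features before the streams are fused. All class names, dimensions, and the FiLM-style modulation here are illustrative assumptions, not the paper's actual CGN/SAPFM implementation.

```python
# Minimal PyTorch sketch of a semantic-modulated pairwise fusion, loosely in the
# spirit of the abstract's SMCIM (CGN + SAPFM). Hypothetical design choices:
# FiLM-style (scale/shift) modulation and the specific feature dimensions.
import torch
import torch.nn as nn


class ConditionGenerationNet(nn.Module):
    """Hypothetical CGN: maps a conditioning feature to per-channel scale/shift."""
    def __init__(self, cond_dim: int, feat_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cond_dim, feat_dim * 2),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim * 2, feat_dim * 2),
        )

    def forward(self, cond: torch.Tensor):
        gamma, beta = self.mlp(cond).chunk(2, dim=-1)
        return gamma, beta


class SemanticAwarePointwiseModulation(nn.Module):
    """Hypothetical SAPFM: modulates point features with the generated conditions."""
    def forward(self, feat, gamma, beta):
        return feat * (1.0 + gamma) + beta


class PairwiseInteractionFusion(nn.Module):
    """Pairwise interaction among geometric, point-wise image, and point-wise
    segmentation features, followed by a fused projection (illustrative only)."""
    def __init__(self, geo_dim=64, img_dim=64, seg_dim=32, out_dim=128):
        super().__init__()
        self.cgn_img = ConditionGenerationNet(img_dim, geo_dim)
        self.cgn_seg = ConditionGenerationNet(seg_dim, geo_dim)
        self.sapfm = SemanticAwarePointwiseModulation()
        self.out_proj = nn.Sequential(
            nn.Linear(geo_dim * 2 + img_dim + seg_dim, out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, geo_feat, img_feat, seg_feat):
        # Modulate geometric features by image- and segmentation-derived conditions.
        g_img, b_img = self.cgn_img(img_feat)
        g_seg, b_seg = self.cgn_seg(seg_feat)
        geo_by_img = self.sapfm(geo_feat, g_img, b_img)
        geo_by_seg = self.sapfm(geo_feat, g_seg, b_seg)
        # Concatenate the interaction results with the raw semantic cues and project.
        fused = torch.cat([geo_by_img, geo_by_seg, img_feat, seg_feat], dim=-1)
        return self.out_proj(fused)


if __name__ == "__main__":
    N = 1024                    # number of LiDAR points
    geo = torch.randn(N, 64)    # point geometric features
    img = torch.randn(N, 64)    # semantic-aware point-wise image features
    seg = torch.randn(N, 32)    # semantic-aware point-wise segmentation features
    fusion = PairwiseInteractionFusion()
    print(fusion(geo, img, seg).shape)  # torch.Size([1024, 128])
```

The sketch keeps the fusion point-wise (one feature vector per LiDAR point), which matches the abstract's emphasis on point-wise image and segmentation features; the exact interaction order and projection in SACINet may differ.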