Journal Article

Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth Estimation
Document Type
Periodical
Source
IEEE Robotics and Automation Letters, 6(3):4672-4679, Jul. 2021
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Keywords
Three-dimensional displays
Feature extraction
Cameras
Estimation
Laser radar
Robot sensing systems
Two dimensional displays
Autonomous driving
depth estimation
sensor fusion
stereo-LiDAR fusion
Language
English
ISSN
2377-3766
2377-3774
Abstract
Stereo-LiDAR fusion is a promising task in that it combines two complementary types of 3D perception for practical use: dense 3D information from stereo cameras and highly accurate sparse point clouds from LiDAR. However, because the two sensors differ in modality and structure, aligning their data is the key to successful fusion. To this end, we propose a geometry-aware stereo-LiDAR fusion network for long-range depth estimation, called the volumetric propagation network. The key idea of our network is to exploit sparse and accurate point clouds as a cue for guiding correspondences of stereo images in a unified 3D volume space. Unlike existing fusion strategies, we directly embed point clouds into the volume, which enables us to propagate valid information into nearby voxels and to reduce the uncertainty of correspondences. This allows us to fuse the two input modalities seamlessly and regress a long-range depth map. Our fusion is further enhanced by a newly proposed feature extraction layer for point clouds guided by images: FusionConv. FusionConv extracts point cloud features that capture both semantic (2D image domain) and geometric (3D domain) relations, aiding fusion in the volume. Our network achieves state-of-the-art performance among recent stereo-LiDAR fusion methods on the KITTI and Virtual KITTI datasets.
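The abstract's embed-and-propagate idea can be sketched concretely. The snippet below is a hypothetical PyTorch illustration, not the authors' implementation: it turns sparse LiDAR disparities into Gaussian confidence along the disparity axis of a stereo matching volume and spreads that guidance to nearby voxels with a fixed 3D pooling, which stands in for the paper's learned propagation. The function name, the score-volume convention, `sigma`, and the pooling kernel are all assumptions made for illustration.

```python
# Hypothetical sketch (not the authors' code): embed sparse LiDAR
# disparities into a stereo matching volume and propagate the guidance
# to nearby voxels. Assumes higher volume values mean better matches.
import torch
import torch.nn.functional as F

def embed_lidar_in_volume(score_volume, lidar_disp, sigma=1.0):
    """score_volume: (B, D, H, W) stereo matching scores.
    lidar_disp: (B, H, W) sparse LiDAR disparities, 0 where no return."""
    B, D, H, W = score_volume.shape
    disp_bins = torch.arange(D, dtype=score_volume.dtype,
                             device=score_volume.device).view(1, D, 1, 1)
    valid = (lidar_disp > 0).unsqueeze(1).float()        # (B, 1, H, W)
    # Gaussian confidence centered on each measured disparity.
    diff = disp_bins - lidar_disp.unsqueeze(1)           # (B, D, H, W)
    guide = torch.exp(-0.5 * (diff / sigma) ** 2) * valid
    # Fixed 3D average pooling spreads the sparse guidance to neighboring
    # voxels; the paper's propagation is learned, this is a stand-in.
    guide = F.avg_pool3d(guide.unsqueeze(1), kernel_size=3,
                         stride=1, padding=1).squeeze(1)
    return score_volume * (1.0 + guide)                  # reweighted volume
```

A soft-argmax over the reweighted disparity axis would then regress the depth map, with the LiDAR guidance shrinking the set of plausible correspondences at each pixel.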
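FusionConv is described only at a high level here: point features should reflect both 2D semantic and 3D geometric relations. One plausible reading, sketched below in PyTorch, samples image features at each point's 2D projection and mixes them with relative positions of the point's 3D nearest neighbors; the class name, the k-NN neighborhood, and the max-pooled MLP are illustrative assumptions, not the published layer.

```python
# Hypothetical FusionConv-style layer: combine 2D semantics (image
# features sampled at each point's projection) with 3D geometric
# relations (offsets to nearest neighbors). Not the published design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionConvSketch(nn.Module):
    def __init__(self, img_ch, out_ch, k=8):
        super().__init__()
        self.k = k
        # MLP over [relative xyz (3) + neighbor image feature (img_ch)].
        self.mlp = nn.Sequential(nn.Linear(3 + img_ch, out_ch), nn.ReLU())

    def forward(self, xyz, uv, img_feat):
        """xyz: (N, 3) LiDAR points; uv: (N, 2) projections in [-1, 1];
        img_feat: (1, C, H, W) 2D semantic feature map."""
        # Bilinearly sample a semantic feature at each point's projection.
        f2d = F.grid_sample(img_feat, uv.view(1, -1, 1, 2),
                            align_corners=True)          # (1, C, N, 1)
        f2d = f2d.squeeze(0).squeeze(-1).t()             # (N, C)
        # k nearest neighbors in 3D (self included) give geometric context.
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices
        rel = xyz[idx] - xyz.unsqueeze(1)                # (N, k, 3)
        feats = torch.cat([rel, f2d[idx]], dim=-1)       # (N, k, 3 + C)
        return self.mlp(feats).max(dim=1).values         # (N, out_ch)
```

Features produced this way could then serve as the point-cloud embedding that the volumetric propagation network injects into its 3D volume.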