Academic Paper

Self-supervised Monocular Depth Estimation in Challenging Environments Based on Illumination Compensation PoseNet
Document Type
Conference
Source
2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9396-9403, Oct. 2024
Subject
Robotics and Control Systems
Three-dimensional displays
Autonomous systems
Depth measurement
Snow
Lighting
Transformers
Reflection
Photometry
Intelligent robots
Language
English
ISSN
2153-0866
Abstract
Self-supervised depth estimation has attracted much attention due to its ability to improve the 3D perception capabilities of unmanned systems. However, existing unsupervised frameworks rely on the assumption of photometric consistency, which may not hold in challenging environments such as night-time, rainy nights, or snowy winters: complex lighting and reflections can make the photometry of the same pixel inconsistent across frames. To address this problem, we propose a unified self-supervised monocular depth estimation framework that handles these complex scenarios and has the following characteristics: (1) an Illumination Compensation PoseNet (ICP), designed on the basis of classic Phong illumination theory, compensates for lighting changes between adjacent frames by estimating per-pixel transformations; (2) a Dual-Axis Transformer (DAT) block is proposed as the backbone of the depth encoder, inferring depth in locally repetitive-texture regions by exploiting global context along both the spatial and channel dimensions of the image. Experimental results demonstrate that our approach achieves state-of-the-art depth estimation results in complex environments on the challenging Oxford RobotCar dataset.
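The record does not include the paper's exact Phong-based formulation, so the following PyTorch sketch is only a rough illustration of the underlying idea: a photometric reprojection loss in which a per-pixel illumination compensation is applied to the warped reference frame before comparison with the target frame. The `contrast` and `bias` maps are hypothetical per-pixel outputs assumed to be predicted alongside camera pose; the SSIM+L1 combination is the standard self-supervised depth objective, not a detail taken from this paper.

```python
import torch
import torch.nn.functional as F


def illumination_compensated_photometric_loss(target, warped, contrast, bias, alpha=0.85):
    """Photometric loss between the target frame and a reference frame warped
    into the target view, after a per-pixel illumination compensation.

    target, warped: (B, 3, H, W) images.
    contrast, bias: (B, 1, H, W) hypothetical per-pixel maps, assumed to be
    predicted by the pose/illumination network; applying them to the warped
    image relaxes the brightness-constancy assumption in challenging lighting.
    """
    # Per-pixel affine illumination compensation of the warped reference frame.
    compensated = contrast * warped + bias

    # L1 photometric error.
    l1 = (target - compensated).abs().mean(dim=1, keepdim=True)

    # Simplified SSIM term (3x3 average-pooling windows), as commonly
    # combined with L1 in self-supervised depth estimation objectives.
    mu_x = F.avg_pool2d(target, 3, 1, 1)
    mu_y = F.avg_pool2d(compensated, 3, 1, 1)
    sigma_x = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(compensated ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(target * compensated, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    )
    ssim_err = ((1 - ssim) / 2).clamp(0, 1).mean(dim=1, keepdim=True)

    # Weighted combination, averaged over batch and pixels.
    return (alpha * ssim_err + (1 - alpha) * l1).mean()
```

In this sketch, setting `contrast = 1` and `bias = 0` everywhere recovers the usual photometric-consistency loss; letting the network predict these maps per pixel is one way a pose network could absorb frame-to-frame lighting changes instead of forcing the depth network to explain them.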