Academic Paper

Efficient Passive Sensing Monocular Relative Depth Estimation
Document Type
Conference
Source
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pp. 1-9, Oct. 2019
Subject
Aerospace
Bioengineering
Computing and Processing
Geoscience
Photonics and Electrooptics
Robotics and Control Systems
Signal Processing and Analysis
Depth estimation
deep convolutional neural networks
superpixel feature extraction
Language
ISSN
2332-5615
Abstract
We propose a method for monocular relative depth perception using a passive visual sensor. Specifically, the proposed method estimates depth with a superpixel-based regression model operating on features extracted by a deep convolutional neural network. We have identified and analyzed the key components required to build a high-efficiency pipeline that solves the depth estimation problem with superpixel-level regression and deep learning. The key contributions of our method over prior work are as follows. First, we drastically simplify the depth estimation model while attaining near state-of-the-art prediction performance, through two important optimizations: the model operates entirely on superpixels, which effectively reduces the dimensionality of the problem, and we exploit a scale-invariant mean squared error loss function whose pairwise term can be computed in linear time. Second, we have developed optimizations for superpixel feature extraction that leverage GPU computing to achieve real-time performance (over 50 fps during training). Furthermore, the model performs no up-sampling, which avoids many issues and difficulties that one would otherwise have to deal with. To facilitate future research in this area, we have created a synchronized multiple-view depth estimation training dataset that is publicly available.
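The scale-invariant loss with a linear-time pairwise term mentioned in the abstract is presumably the standard scale-invariant log-space MSE introduced by Eigen et al. (2014); the abstract does not give the formula, so the sketch below (in NumPy) is an illustration under that assumption. The key trick is that the O(n²) pairwise sum over all log-depth differences collapses algebraically to the square of a single sum, making it O(n):

```python
import numpy as np

def scale_invariant_mse(pred, target, lam=1.0):
    """Scale-invariant MSE in log space (Eigen et al., 2014).

    With lam=1 the loss is fully invariant to a global scaling of the
    prediction; lam=0.5 is a common compromise. The pairwise term
    (1/2n^2) * sum_{i,j} (d_i - d_j)^2 is computed in O(n) because it
    equals (1/n) * sum d_i^2 - (1/n^2) * (sum d_i)^2.
    """
    d = np.log(pred) - np.log(target)   # per-element log-depth error
    n = d.size
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2
```

With lam=1, multiplying every predicted depth by a constant shifts all d_i by the same amount and leaves the loss unchanged, which is what makes the loss suitable for *relative* depth estimation.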