Journal Article

Density-Aware U-Net for Unstructured Environment Dust Segmentation
Document Type
Periodical
Source
IEEE Sensors Journal, 24(6):8210-8226, Mar. 2024
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Feature extraction
Image color analysis
Aerosols
Shape
Sensors
Semantic segmentation
Deep learning
Convolutional neural networks (CNNs)
dust segmentation
unstructured environments
Language
English
ISSN
Print ISSN: 1530-437X
Electronic ISSN: 1558-1748
CD ISSN: 2379-9153
Abstract
Vision-based segmentation methods rely heavily on image quality, and mining environments are full of dust, which greatly reduces visibility. Efficient and accurate segmentation of dusty regions in mining environments can improve the performance of unmanned vehicle environment perception. In this article, a dust segmentation method based on a novel density-aware nested U-structure convolutional neural network (DAUnet) is proposed. Compared with existing dust segmentation methods, our method has three advantages. First, we introduce the residual channel–spatial attention (RCSA) block. The block contains two attention layers and a residual structure, which can extract features more efficiently. Second, we introduce the difference expansion layer. This structure filters the predicted dust probabilities, eliminates pixels with lower probabilities, and then maps similar probability values to larger probability intervals, thus improving the network performance. Finally, for dust visualization, we use the predicted probabilities to reflect the dust density, which results in a smoother transition between the dust edges and the background. In addition, most of the current dust datasets are generated by simulation tools, and there is a lack of open-source real-world datasets. Therefore, we constructed the MineDust (MD) dataset based on a real open-pit mining environment. This dataset consists of dust-state images under different weather conditions and complex scenes. Experiments demonstrate that our algorithm can achieve 79.64% mIoU, which outperforms many existing segmentation methods.
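The two components highlighted in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch approximation of a residual channel-spatial attention (RCSA) block and a difference expansion layer, written only from the abstract's description; the class names, reduction ratio, kernel sizes, and probability threshold are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of the two mechanisms described in the abstract.
# All names (RCSABlock, DifferenceExpansion) and hyperparameters are assumptions.
import torch
import torch.nn as nn


class RCSABlock(nn.Module):
    """Residual channel-spatial attention: channel attention, then spatial
    attention, wrapped in a residual (skip) connection."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, produce per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over channel-pooled (mean, max) maps.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x * self.channel_att(x)                        # channel attention
        pooled = torch.cat(
            [out.mean(dim=1, keepdim=True), out.amax(dim=1, keepdim=True)], dim=1
        )
        out = out * self.spatial_att(pooled)                 # spatial attention
        return out + x                                       # residual connection


class DifferenceExpansion(nn.Module):
    """Difference expansion (per the abstract): suppress pixels with low
    predicted dust probability, then stretch the surviving probabilities
    onto a wider interval."""

    def __init__(self, threshold: float = 0.3):  # threshold is an assumption
        super().__init__()
        self.threshold = threshold

    def forward(self, prob: torch.Tensor) -> torch.Tensor:
        kept = torch.where(prob >= self.threshold, prob, torch.zeros_like(prob))
        # Linearly remap [threshold, 1] onto [0, 1] so close probabilities
        # are spread over a larger interval.
        return (kept - self.threshold).clamp(min=0) / (1.0 - self.threshold)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)
    probs = torch.sigmoid(torch.randn(1, 1, 128, 128))
    print(RCSABlock(64)(feats).shape)          # torch.Size([1, 64, 128, 128])
    print(DifferenceExpansion()(probs).max())  # max of the expanded density map
```

The expanded probability map doubles as the density visualization mentioned in the abstract: rendering it directly (rather than a hard mask) yields the smoother transition between dust edges and background that the authors describe.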