Academic Paper

HierNet: Hierarchical Transformer U-Shape Network for RGB-D Salient Object Detection
Document Type
Conference
Source
2023 35th Chinese Control and Decision Conference (CCDC), pp. 1807-1811, May 2023
Subject
General Topics for Engineers
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Cross layer design
Convolution
Object detection
Coherence
Transformers
Feature extraction
Sensors
salient object detection
RGB-D
transformer
self-attention
Language
English
ISSN
1948-9447
Abstract
With the popularity of depth sensors, research on RGB-D salient object detection (SOD) is thriving. However, owing to limitations of the external environment and of the sensor itself, depth information is often unreliable. To meet this challenge, existing models typically purify the depth information with complex convolution and pooling operations. This discards a large amount of useful information along with the noise and reduces the opportunities for multi-modality interaction between RGB and depth. Moreover, as information is gradually lost, the hidden relationships among multi-level features are ignored. To tackle these problems, we propose a Hierarchical Transformer U-Shape Network (HierNet) with three contributions: 1) With a simple structure, a depth calibration module provides faithful depth information with minimal information loss, creating the conditions for cross-modality, cross-layer information interaction; 2) With multi-head attention, a set of global-view transformer encoders is employed to find the potential coherence between the RGB and depth modalities. Through weight sharing, several such transformer encoder sets form the hierarchical transformer embedding module, which searches for long-range dependencies across levels; 3) Considering the complementary features of the U-shape network, we adopt a dual-stream U-shape network as our backbone. Extensive and fair experiments on four challenging datasets demonstrate the outstanding performance of the proposed model compared with state-of-the-art models.
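
For intuition, the following is a minimal PyTorch sketch of the weight-sharing idea summarized in the abstract: one transformer encoder set, built on multi-head self-attention over concatenated RGB and depth tokens, is reused across several feature levels. The class name, dimensions, and token layout are illustrative assumptions, not the authors' released implementation.

# Hedged sketch: a weight-shared transformer encoder applied to concatenated
# RGB and depth tokens at multiple feature levels. All names and
# hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class CrossModalTransformer(nn.Module):
    """One encoder set shared across feature levels (weight sharing)."""

    def __init__(self, dim: int = 256, heads: int = 8, depth: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, rgb_tokens, depth_tokens):
        # Concatenate the two modalities along the token axis so that
        # multi-head self-attention can relate RGB and depth globally.
        n = rgb_tokens.shape[1]
        fused = self.encoder(torch.cat([rgb_tokens, depth_tokens], dim=1))
        return fused[:, :n], fused[:, n:]  # split back per modality


if __name__ == "__main__":
    enc = CrossModalTransformer(dim=256)
    rgb = torch.randn(2, 196, 256)  # e.g. 14x14 RGB feature tokens
    dep = torch.randn(2, 196, 256)  # matching depth feature tokens
    # The same encoder instance is reused at every level, mirroring the
    # weight sharing described in the abstract.
    for level in range(3):
        rgb, dep = enc(rgb, dep)
    print(rgb.shape, dep.shape)  # torch.Size([2, 196, 256]) twice

Reusing a single encoder instance at every level keeps the parameter count fixed while still letting attention mix information across modalities, which is consistent with the weight sharing and cross-level dependency search described in the abstract.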