Academic Article

A DeNoising FPN With Transformer R-CNN for Tiny Object Detection
Document Type
Periodical
Source
IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-15, 2024
Subject
Geoscience
Signal Processing and Analysis
Feature extraction
Semantics
Object detection
Noise
Detectors
Transformers
Noise reduction
Aerial image
contrastive learning
noise reduction
tiny object detection
transformer-based detector
Language
English
ISSN
0196-2892 (Print)
1558-0644 (Online)
Abstract
Despite notable advancements in the field of computer vision (CV), the precise detection of tiny objects continues to pose a significant challenge, largely due to the minuscule pixel representation allocated to these objects in imagery data. This challenge resonates profoundly in the domain of geoscience and remote sensing, where high-fidelity detection of tiny objects can facilitate a myriad of applications ranging from urban planning to environmental monitoring. In this article, we propose a new framework, namely, DeNoising feature pyramid network (FPN) with Trans R-CNN (DNTR), to improve the performance of tiny object detection. DNTR consists of an easy plug-in design, DeNoising FPN (DN-FPN), and an effective Transformer-based detector, Trans region-based convolutional neural network (R-CNN). Specifically, feature fusion in the FPN is important for detecting multiscale objects. However, noisy features may be produced during the fusion process since there is no regularization between the features of different scales. Therefore, we introduce a DN-FPN module that utilizes contrastive learning to suppress noise in each level’s features along the top-down path of the FPN. Second, based on the two-stage framework, we replace the obsolete R-CNN detector with a novel Trans R-CNN detector that focuses on the representation of tiny objects with self-attention. The experimental results demonstrate that our DNTR outperforms the baselines by at least 17.4% in terms of $\text{AP}_{vt}$ on the AI-TOD dataset and 9.6% in terms of average precision (AP) on the VisDrone dataset. Our code will be available at https://github.com/hoiliu-0801/DNTR.
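The abstract describes DN-FPN as a contrastive-learning regularizer applied to the FPN's top-down fusion path. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes an InfoNCE-style loss between the lateral (bottom-up) and top-down features at each pyramid level, with global-average-pooled embeddings, 1x1 projection heads, in-batch negatives, and a fixed temperature, none of which are specified in this record.

```python
# Hypothetical sketch of a contrastive "denoising" loss for FPN fusion.
# Pair construction, projection heads, pooling, and temperature are
# assumptions for illustration; they are not taken from the DNTR paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveFPNDenoiser(nn.Module):
    def __init__(self, channels=256, proj_dim=128, temperature=0.1):
        super().__init__()
        # 1x1 projection heads map pyramid features to an embedding space.
        self.proj_lateral = nn.Conv2d(channels, proj_dim, kernel_size=1)
        self.proj_topdown = nn.Conv2d(channels, proj_dim, kernel_size=1)
        self.temperature = temperature

    def forward(self, lateral_feats, topdown_feats):
        """lateral_feats / topdown_feats: lists of (B, C, H_l, W_l) tensors,
        one per pyramid level, spatially aligned level by level."""
        losses = []
        for lat, top in zip(lateral_feats, topdown_feats):
            # Global-average-pool to one embedding per image and level.
            z_lat = F.normalize(self.proj_lateral(lat).mean(dim=(2, 3)), dim=1)
            z_top = F.normalize(self.proj_topdown(top).mean(dim=(2, 3)), dim=1)
            # Positives: same image across the two paths;
            # negatives: other images in the batch (InfoNCE).
            logits = z_lat @ z_top.t() / self.temperature  # (B, B)
            targets = torch.arange(logits.size(0), device=logits.device)
            losses.append(F.cross_entropy(logits, targets))
        return torch.stack(losses).mean()
```

In such a setup, this auxiliary loss would be added to the detector's standard classification and regression losses during training, encouraging the top-down features at each level to stay consistent with their lateral counterparts and thereby regularizing the fusion step.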