Journal Article

MBSI-Net: Multimodal Balanced Self-Learning Interaction Network for Image Classification
Document Type
Periodical
Source
IEEE Transactions on Circuits and Systems for Video Technology, 34(5):3819-3833, May 2024
Subject
Components, Circuits, Devices and Systems
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Remote sensing
Feature extraction
Spatial resolution
Training
Satellites
Image classification
Knowledge engineering
multimodal
remote sensing
transfer learning
Language
English
ISSN
1051-8215
1558-2205
Abstract
Owing to the expanding availability and resolution of satellite remote sensing data, a growing number of Earth observation satellites can simultaneously gather multimodal images of the same area. This paper proposes a novel multimodal balanced self-learning interaction network (MBSI-Net) for the image classification task. It is built on a dual-branch teacher-student network that enables knowledge interaction and transfer between the modalities. First, a texture feature equalization module (TFE-Module) is proposed to introduce statistical information in addition to local and global structural information; it enhances the texture information of features through histogram equalization and further improves their representation ability. Second, to enable the student network to provide timely feedback to the teacher, a feature fusion module (F2-Module) is proposed that models and enhances the teacher features through the student network, which helps to improve classification accuracy by incorporating information from the multimodal images. Finally, a loss function based on structural similarity analysis is proposed to ensure balanced self-learning between the student and teacher networks. Taking multispectral (MS) and panchromatic (PAN) images of the same scene as an example, experiments show that the proposed method achieves good results on multiple datasets compared with other methods, offering an effective approach for classifying and fusing multimodal data.
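The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the dual-branch teacher-student idea described in the abstract, assuming the MS image feeds the student branch and the PAN image feeds the teacher branch, with a simple concatenation standing in for the F2-Module and a simplified structural-similarity term standing in for the balanced self-learning loss. All names (Branch, DualBranchClassifier, mbsi_style_loss, structural_similarity) and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def structural_similarity(a, b, eps=1e-6):
    """Simplified, window-free SSIM over feature maps (per-channel means/variances)."""
    mu_a, mu_b = a.mean(dim=(2, 3)), b.mean(dim=(2, 3))
    var_a, var_b = a.var(dim=(2, 3)), b.var(dim=(2, 3))
    cov = ((a - mu_a[..., None, None]) * (b - mu_b[..., None, None])).mean(dim=(2, 3))
    ssim = ((2 * mu_a * mu_b + eps) * (2 * cov + eps)) / (
        (mu_a ** 2 + mu_b ** 2 + eps) * (var_a + var_b + eps)
    )
    return ssim.mean()


class Branch(nn.Module):
    """Small CNN encoder reused for both the teacher (PAN) and student (MS) branches."""
    def __init__(self, in_ch, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.BatchNorm2d(feat_ch), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.BatchNorm2d(feat_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class DualBranchClassifier(nn.Module):
    """Illustrative dual-branch classifier; the 1x1 fusion conv is a stand-in for the F2-Module."""
    def __init__(self, ms_ch=4, pan_ch=1, feat_ch=64, num_classes=10):
        super().__init__()
        self.student = Branch(ms_ch, feat_ch)   # assumed: MS branch as student
        self.teacher = Branch(pan_ch, feat_ch)  # assumed: PAN branch as teacher
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)
        self.head = nn.Linear(feat_ch, num_classes)

    def forward(self, ms, pan):
        fs = self.student(ms)
        ft = self.teacher(pan)
        # Bring PAN features to the MS feature grid before fusion.
        ft = F.interpolate(ft, size=fs.shape[-2:], mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([fs, ft], dim=1))
        logits = self.head(fused.mean(dim=(2, 3)))  # global average pooling + linear classifier
        return logits, fs, ft


def mbsi_style_loss(logits, labels, fs, ft, alpha=0.1):
    """Cross-entropy plus a term that rewards structural agreement between the two branches."""
    ce = F.cross_entropy(logits, labels)
    balance = 1.0 - structural_similarity(fs, ft)
    return ce + alpha * balance


if __name__ == "__main__":
    # Toy usage: a 4-band MS patch and a higher-resolution PAN patch of the same scene.
    model = DualBranchClassifier()
    ms, pan = torch.randn(2, 4, 16, 16), torch.randn(2, 1, 64, 64)
    labels = torch.randint(0, 10, (2,))
    logits, fs, ft = model(ms, pan)
    mbsi_style_loss(logits, labels, fs, ft).backward()
```

The weight alpha on the similarity term is an arbitrary placeholder; the paper's actual TFE-Module (histogram equalization of features) and F2-Module are more elaborate than the stand-ins used here.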