Journal Article

LeGFusion: Locally Enhanced Global Learning for Multimodal Image Fusion
Document Type
Periodical
Source
IEEE Sensors Journal, 24(8):12806-12818, Apr. 2024
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Image fusion
Transformers
Task analysis
Sensors
Feature extraction
Generative adversarial networks
Biomedical imaging
Locally enhanced global learning
multimodal image fusion (MMIF)
transformer
Language
English
ISSN
1530-437X
1558-1748
2379-9153
Abstract
Multimodal image fusion (MMIF) can provide more comprehensive scene characteristics by synthesizing a single image from multi-sensor images of the same scene, overcoming the limitations of single-type hardware. To handle MMIF tasks, current deep learning (DL)-based methods usually use convolutional neural networks (CNNs) or combine them with transformers to extract local and global contextual information from source images. However, none of the existing works fully explores contextual information both across modalities and within single modalities, leading to limited fusion results. To this end, we propose a new MMIF method via locally enhanced global learning, termed LeGFusion. Specifically, the network of LeGFusion is built on a locally enhanced transformer block (LETB), which captures long-range dependencies through nonoverlapping window-based self-attention while capturing useful local context by incorporating convolution operators into the transformer. On one hand, several LETBs are deployed to extract global contexts from each modality while emphasizing its local information. On the other hand, the fusion module, which also consists of LETBs, is designed to integrate multimodal features by perceiving cross-modal local and global interactions. Powered by this exploration of intramodal and intermodal contextual information, the proposed LeGFusion has a high capability for capturing significant complementary information for image fusion. Extensive experiments are conducted on two types of MMIF tasks, including infrared-visible image fusion (IVF) and medical image fusion. The qualitative and quantitative evaluation results demonstrate the superiority of our LeGFusion over state-of-the-art methods. Furthermore, we validate the generalization ability of LeGFusion without fine-tuning and achieve excellent results.
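The core idea in the abstract — nonoverlapping window-based self-attention for global context, combined with a convolutional branch for local context — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the single-head attention with identity projections, the mean-filter stand-in for the convolutional branch, and the residual-sum merge are all simplifying assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(x, win):
    """Nonoverlapping window-based self-attention over a (H, W, C) feature map.
    Single head with identity Q/K/V projections, for brevity; each win x win
    window attends only within itself, keeping cost linear in image size."""
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            tokens = x[i:i + win, j:j + win].reshape(-1, C)      # (win*win, C)
            attn = softmax(tokens @ tokens.T / np.sqrt(C))       # (N, N) weights
            out[i:i + win, j:j + win] = (attn @ tokens).reshape(win, win, C)
    return out

def local_conv(x, k=3):
    """A k x k depthwise mean filter standing in for the convolution operator
    that injects local context into the transformer block (assumed form)."""
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean(axis=(0, 1))
    return out

def letb_sketch(x, win=4):
    """Sketch of a locally enhanced transformer block: global window attention
    plus a convolutional local branch, merged here by a simple residual sum."""
    return x + window_self_attention(x, win) + local_conv(x)

feat = np.random.default_rng(0).standard_normal((8, 8, 16))
fused = letb_sketch(feat)
print(fused.shape)  # (8, 8, 16)
```

In the actual method, several such blocks would extract intramodal features per modality, and a fusion module built from the same blocks would integrate the multimodal features; the merge strategy shown here is only a placeholder.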