Academic Article

AFCANet: An adaptive feature concatenate attention network for multi-focus image fusion
Document Type
article
Source
Journal of King Saud University: Computer and Information Sciences, Vol 35, Iss 9, Pp 101751- (2023)
Subject
Multi-focus image fusion
Unsupervised training
Adaptive feature concatenate
Attention module
Electronic computers. Computer science
QA75.5-76.95
Language
English
ISSN
1319-1578
Abstract
For multi-focus image fusion, existing deep learning-based methods cannot effectively learn the texture features and semantic information of the source images to generate high-quality fused images. We therefore develop a new adaptive feature concatenate attention network, named AFCANet, which adaptively learns cross-layer features and retains the texture features and semantic information of the images to generate visually appealing, fully focused images. In AFCANet, an encoder-decoder network serves as the backbone. In the unsupervised training stage, an adaptive cross-layer skip connection scheme is designed, and a cross-layer adaptive coordinate attention module is built to acquire meaningful information from the image while ignoring unimportant information, yielding a better fusion result. In addition, in the middle of the encoder-decoder network, we introduce an efficient channel attention module to fully learn the encoder output and accelerate network convergence. In the inference stage, we apply pixel-based spatial frequency fusion rules to fuse the adaptive features learned by the encoder, which combines the texture and semantic information of the image and produces a more precise decision map. Extensive experiments on public datasets and the HBU-CVMDSP dataset show that AFCANet effectively improves the accuracy of the decision map in focused and defocused regions and better retains the abundant details and edge features of the source images.
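The abstract only names the pixel-based spatial frequency fusion rule used at inference; the NumPy sketch below illustrates the standard form of such a rule under stated assumptions. The function names, the window size `win`, and the use of encoder feature maps (`feat_a`, `feat_b`) to drive the decision map are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(x, win=7):
    """Local spatial frequency: sqrt of the windowed mean of squared
    row-wise and column-wise intensity differences."""
    rf = np.zeros_like(x, dtype=np.float64)
    cf = np.zeros_like(x, dtype=np.float64)
    rf[1:, :] = (x[1:, :] - x[:-1, :]) ** 2   # vertical (row) differences
    cf[:, 1:] = (x[:, 1:] - x[:, :-1]) ** 2   # horizontal (column) differences
    # average squared differences over a win x win neighborhood, then take sqrt
    return np.sqrt(uniform_filter(rf, size=win) + uniform_filter(cf, size=win))

def fuse_by_spatial_frequency(feat_a, feat_b, img_a, img_b, win=7):
    """Pixel-wise decision map: at each pixel, keep the source whose
    feature map has the larger local spatial frequency."""
    sf_a = spatial_frequency(feat_a, win)
    sf_b = spatial_frequency(feat_b, win)
    decision = (sf_a >= sf_b).astype(np.float64)   # 1 -> source A, 0 -> source B
    fused = decision * img_a + (1.0 - decision) * img_b
    return fused, decision
```

In practice the raw decision map is usually refined (e.g., by small-region removal or guided filtering) before blending, so the hard selection above should be read as the core of the rule rather than the full inference pipeline.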