Academic paper

Multi-level Dispersion Residual Network for Efficient Image Super-Resolution
Document Type
Conference
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1660-1669, Jun. 2023
Subject
Computing and Processing
Engineering Profession
Interpolation
Convolution
Computational modeling
Superresolution
Transformers
Complexity theory
Pattern recognition
Language
English
ISSN
2160-7516
Abstract
Recently, single image super-resolution (SISR) has made great progress, especially through the combination of convolutional neural networks (CNNs) and Transformers, but the resulting model complexity is undesirable for efficient image super-resolution (EISR) and unaffordable for edge devices. As a result, many lightweight techniques, such as distillation and pruning, have been investigated for EISR. However, designing more powerful attention mechanisms is also a promising way to improve network efficiency. In this paper, we propose a multi-level dispersion residual network (MDRN) for EISR. Its basic block, the enhanced attention distillation block (EADB), comprises the proposed multi-level dispersion spatial attention (MDSA) and enhanced contrast-aware channel attention (ECCA). MDSA introduces multi-scale and variance information to obtain a more accurate spatial attention distribution. ECCA effectively combines lightweight convolution layers and residual connections to improve the efficiency of channel attention. Experimental results show that the proposed methods are effective and that MDRN achieves a better balance of performance and complexity than state-of-the-art models. In addition, we won first place in the model complexity track of the NTIRE 2023 Efficient SR Challenge. The code is available at https://github.com/bbbolt/MDRN.
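To make the channel-attention idea in the abstract concrete, below is a minimal NumPy sketch of a contrast-aware channel attention gate with a residual connection. The exact layer shapes, the `ecca_sketch` name, and the reading of "enhanced" as a residual around the gate are assumptions for illustration; the authors' actual ECCA design is defined in the paper and the linked repository.

```python
import numpy as np

def ecca_sketch(x, w1, w2):
    """Hedged sketch of an ECCA-style channel attention (details assumed).

    x  : (C, H, W) feature map
    w1 : (C // r, C) squeeze weights; w2 : (C, C // r) excite weights
    """
    # Contrast-aware descriptor: per-channel mean + standard deviation,
    # so the gate responds to channel contrast, not just average activation.
    z = x.mean(axis=(1, 2)) + x.std(axis=(1, 2))   # (C,)
    h = np.maximum(w1 @ z, 0.0)                    # bottleneck + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))            # sigmoid gate in (0, 1), (C,)
    # Residual connection around the attention (our reading of "enhanced").
    out = x * a[:, None, None] + x
    return out, a
```

In a real EISR model the two matrix multiplies would be 1x1 convolutions and the whole block would sit inside EADB; this sketch only shows the squeeze-gate-residual data flow.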