Academic Paper

DIPNet: Efficiency Distillation and Iterative Pruning for Image Super-Resolution
Document Type
Conference
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1692-1701, Jun. 2023
Subject
Computing and Processing
Engineering Profession
Training
Computer vision
Conferences
Computational modeling
Superresolution
Boosting
Pattern recognition
Language
English
ISSN
2160-7516
Abstract
Efficient deep learning-based approaches have achieved remarkable performance in single image super-resolution. However, recent studies on efficient super-resolution have mainly focused on reducing the number of parameters and floating-point operations through various network designs. Although these methods can decrease the number of parameters and floating-point operations, they do not necessarily reduce actual running time. To address this issue, we propose a novel multi-stage lightweight network boosting method, which enables lightweight networks to achieve outstanding performance. Specifically, we leverage enhanced high-resolution output as additional supervision to improve the learning ability of lightweight student networks. Once the student network converges, we further simplify the network structure to a more lightweight level using reparameterization techniques and iterative network pruning. Meanwhile, we adopt an effective lightweight network training strategy that combines multi-anchor distillation and progressive learning. Ultimately, our proposed method achieves the fastest inference time among all participants in the NTIRE 2023 efficient super-resolution challenge while maintaining competitive super-resolution performance. Additionally, extensive experiments demonstrate the effectiveness of the proposed components. The results show that our approach achieves comparable performance on the representative DIV2K dataset, both qualitatively and quantitatively, with faster inference and fewer network parameters.
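The abstract describes a pipeline that combines distillation with an enhanced high-resolution target as extra supervision and iterative pruning of a converged student network. The PyTorch sketch below is a rough illustration of those two ideas only, under stated assumptions: the class and function names (LightweightSR, distillation_loss, prune_round) and the parameters (alpha, amount) are hypothetical and not taken from the paper, which does not specify its losses, pruning criterion, or ratios in this abstract.

```python
# Illustrative sketch only; not the authors' DIPNet implementation.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class LightweightSR(nn.Module):
    """Stand-in student network: a few 3x3 convs plus pixel-shuffle upsampling."""
    def __init__(self, channels=32, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
        )
        self.up = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.up(self.body(x))

def distillation_loss(student_sr, teacher_sr, enhanced_hr, alpha=0.5):
    """L1 loss against the teacher output plus L1 loss against an
    enhanced high-resolution target used as additional supervision."""
    l1 = nn.L1Loss()
    return alpha * l1(student_sr, teacher_sr) + (1 - alpha) * l1(student_sr, enhanced_hr)

def prune_round(model, amount=0.1):
    """One round of iterative pruning: zero out the lowest-magnitude weights
    of every conv layer, then make the sparsity permanent."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the zeros into the weight tensor

student = LightweightSR()
lr = torch.rand(1, 3, 64, 64)
sr = student(lr)  # 1 x 3 x 256 x 256 super-resolved output
# After the student converges, one could alternate: prune_round(student); fine-tune; repeat.
```

In this sketch the pruning is unstructured magnitude pruning applied round by round with fine-tuning in between; the paper's actual pruning scheme and its reparameterization step are not detailed in the abstract.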