Academic Paper

A Sensitivity-based Pruning Method for Convolutional Neural Networks
Document Type
Conference
Source
2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1032-1038, Oct. 2022
Subject
Bioengineering
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Degradation
Sensitivity
Stochastic processes
Convolutional neural networks
Cybernetics
Convolutional Neural Networks
Pruning
Stochastic Sensitivity Measure
Compensation Operation
Language
English
ISSN
2577-1655
Abstract
The application of convolutional neural networks (CNNs) is sometimes limited by their large numbers of parameters and floating-point operations. Pruning methods have proved effective at alleviating this problem: they improve the efficiency and reduce the storage footprint of CNNs by removing the weights connected to selected neurons/channels. The key issue is selecting suitable neurons/channels to prune. Fine-tuning is then usually applied to restore the performance of the pruned model to its pre-pruning level. However, existing neuron/channel selection methods do not explicitly consider the impact of pruning on the model output. Moreover, the performance of a fine-tuned model may suffer from the information loss caused by the pruned neurons/channels. In this work, a stochastic sensitivity measure-based neuron/channel selection criterion is proposed to choose and prune insensitive neurons/channels, which effectively reduces the degradation of model performance. In addition, a compensation operation followed by fine-tuning is proposed to relieve the information loss problem and restore model performance. Experimental results show that our method yields compression and acceleration rates comparable to those of existing CNN pruning methods, with less accuracy degradation. For instance, on VGG-16 the proposed method achieves a 6.8% greater FLOPs reduction and a 0.25% accuracy improvement compared with a recently proposed pruning method.
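For illustration, the following is a minimal PyTorch sketch of the general workflow the abstract describes: score channels by a stochastic sensitivity proxy, prune the least sensitive ones, and apply a compensation step before fine-tuning. The abstract does not specify the paper's actual sensitivity measure or compensation operation, so the variance-under-perturbation score, the bias-folding compensation, and all function names below are assumptions, not the authors' method.

```python
# Illustrative sketch only: the sensitivity score and compensation here are
# assumed stand-ins for the paper's (unspecified) stochastic sensitivity
# measure and compensation operation.
import torch
import torch.nn as nn

def stochastic_channel_sensitivity(conv, x, sigma=0.01, n_samples=8):
    """Assumed proxy score: mean squared deviation of each output channel of
    `conv` under small Gaussian perturbations of the calibration batch `x`."""
    with torch.no_grad():
        base = conv(x)                                    # (N, C_out, H, W)
        scores = torch.zeros(base.shape[1], device=base.device)
        for _ in range(n_samples):
            noisy = conv(x + sigma * torch.randn_like(x))
            scores += ((noisy - base) ** 2).mean(dim=(0, 2, 3))
        return scores / n_samples

def prune_least_sensitive(conv, next_conv, x, keep_ratio=0.7):
    """Drop the least-sensitive output channels of `conv`, folding their mean
    contribution into `next_conv`'s bias as one simple compensation choice
    (assumes `conv` feeds `next_conv` directly)."""
    scores = stochastic_channel_sensitivity(conv, x)
    n_keep = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, n_keep).indices.sort().values
    mask = torch.ones(scores.numel(), dtype=torch.bool, device=scores.device)
    mask[keep] = False
    drop = mask.nonzero(as_tuple=True)[0]

    with torch.no_grad():
        if drop.numel() > 0 and next_conv.bias is not None:
            # Compensation: approximate each pruned channel's activation by
            # its mean on `x` and absorb that constant into the next bias.
            act = conv(x).mean(dim=(0, 2, 3))
            w_drop = next_conv.weight[:, drop]            # (C_out2, |drop|, k, k)
            next_conv.bias += torch.einsum('odkl,d->o', w_drop, act[drop])

        # Rebuild both layers with only the kept channels.
        conv.weight = nn.Parameter(conv.weight[keep].clone())
        if conv.bias is not None:
            conv.bias = nn.Parameter(conv.bias[keep].clone())
        conv.out_channels = n_keep
        next_conv.weight = nn.Parameter(next_conv.weight[:, keep].clone())
        next_conv.in_channels = n_keep
    return keep
```

After such a pruning pass, fine-tuning on the training set would restore accuracy, matching the prune-compensate-fine-tune pipeline the abstract outlines.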