Academic Paper

Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths
Document Type
Conference
Source
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 946-956, Jan. 2024
Subject
Computing and Processing
Knowledge engineering
Adaptation models
Computer vision
Adaptive systems
Quantization (signal)
Collaboration
Artificial neural networks
Algorithms
Image recognition and understanding
Language
English
ISSN
2642-9381
Abstract
Quantizing deep networks with adaptive bit-widths is a promising technique for efficient inference across many devices and resource constraints. In contrast to static methods that repeat the quantization process and train different models for different constraints, adaptive quantization enables us to flexibly adjust the bit-widths of a single deep network during inference for instant adaptation in different scenarios. While existing research shows encouraging results on common image classification benchmarks, this paper investigates how to train such adaptive networks more effectively. Specifically, we present two novel techniques for quantizing deep neural networks with adaptive bit-widths of weights and activations. First, we propose a collaborative strategy to choose a high-precision "teacher" for transferring knowledge to the low-precision "student" while jointly optimizing the model with all bit-widths. Second, to effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network. Extensive experiments on multiple image and video classification datasets demonstrate the efficacy of our approach over state-of-the-art methods.
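
To make the two ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation. All names and choices here are illustrative assumptions: the uniform straight-through quantizer in fake_quantize, the QuantBlock/AdaptiveNet structure, the bit-width set (8, 4, 2), and the swap probability. The highest bit-width simply stands in for the "teacher" (the paper's collaborative teacher-selection strategy is more involved), and each block of a low-precision student is randomly run at teacher precision to mimic dynamic block swapping.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quantize(x, bits):
    # Uniform symmetric fake quantization with a straight-through estimator.
    # This is a stand-in quantizer; the abstract does not specify one.
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).round().clamp(-qmax, qmax) * scale
    return x + (q - x).detach()  # identity gradient through the rounding


class QuantBlock(nn.Module):
    # A toy "block" whose weights and activations are quantized at a
    # bit-width chosen at call time.
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x, bits):
        w = fake_quantize(self.fc.weight, bits)
        return fake_quantize(F.relu(F.linear(x, w, self.fc.bias)), bits)


class AdaptiveNet(nn.Module):
    def __init__(self, dim=32, depth=4, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([QuantBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, bits, swap_bits=None, swap_prob=0.0):
        # Dynamic block swapping: each block independently runs at the
        # teacher's precision with probability swap_prob.
        for block in self.blocks:
            use_teacher = swap_bits is not None and random.random() < swap_prob
            x = block(x, swap_bits if use_teacher else bits)
        return self.head(x)


def train_step(model, optimizer, x, y, bit_widths=(8, 4, 2), swap_prob=0.5):
    # One joint-optimization step over all bit-widths, distilling from the
    # highest precision (playing the "teacher") into each lower precision.
    optimizer.zero_grad()
    teacher_bits = max(bit_widths)
    teacher_logits = model(x, teacher_bits)
    loss = F.cross_entropy(teacher_logits, y)  # teacher trains on labels
    for bits in sorted(bit_widths)[:-1]:
        student_logits = model(x, bits, swap_bits=teacher_bits,
                               swap_prob=swap_prob)
        # Distill the teacher's soft predictions into the student.
        loss = loss + F.kl_div(
            F.log_softmax(student_logits, dim=-1),
            F.softmax(teacher_logits.detach(), dim=-1),
            reduction="batchmean",
        )
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage: one training step on random data.
model = AdaptiveNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
print(train_step(model, opt, torch.randn(16, 32), torch.randint(0, 10, (16,))))
```

Note that swapping a student block to teacher precision, as above, exposes the low-precision path to higher-precision intermediate features during training, which is the intuition the abstract gives for why block swapping aids knowledge transfer.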