Academic Paper

You Look Twice: GaterNet for Dynamic Filter Selection in CNNs
Document Type
Conference
Source
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9164-9172, Jun. 2019
Subject
Computing and Processing
Deep Learning
Recognition: Detection, Categorization, Retrieval
Language
English
ISSN
2575-7075
Abstract
The concept of conditional computation for deep nets has been proposed previously to improve model performance by selectively using only parts of the model, conditioned on the sample being processed. In this paper, we investigate input-dependent dynamic filter selection in deep convolutional neural networks (CNNs). The problem is interesting because the idea of forcing different parts of the model to learn from different types of samples may help us acquire better filters in CNNs, improve the model's generalization performance, and potentially increase the interpretability of model behavior. We propose a novel yet simple framework called GaterNet, which involves a backbone and a gater network. The backbone network is a regular CNN that performs the major computation needed for making a prediction, while a global gater network is introduced to generate binary gates for selectively activating filters in the backbone network based on each input. Extensive experiments on the CIFAR and ImageNet datasets show that our models consistently outperform the original models by a large margin. On CIFAR-10, our model also improves upon state-of-the-art results.
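The backbone/gater interaction described in the abstract can be illustrated with a minimal sketch: a gater maps the input to one binary gate per backbone filter, and the gates multiplicatively switch feature-map channels on or off. All names (`gater`, `gated_conv_output`, `W_g`) and the linear gater are hypothetical simplifications; the paper's actual gater is itself a CNN whose discrete gates are trained end-to-end with the backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

def gater(x, W_g):
    # Hypothetical simplified gater: global-average-pool the input,
    # then a linear map produces one logit per backbone filter.
    pooled = x.mean(axis=(1, 2))            # (C_in,)
    logits = W_g @ pooled                   # (C_out,)
    return (logits > 0).astype(x.dtype)     # hard binary gates in {0, 1}

def gated_conv_output(feat, gates):
    # Apply per-filter binary gates to backbone feature maps.
    # feat: (C_out, H, W) activations from a conv layer; gates: (C_out,)
    return feat * gates[:, None, None]

# Toy example: 3-channel input, backbone layer with 8 filters.
x = rng.standard_normal((3, 32, 32))
W_g = rng.standard_normal((8, 3))                 # gater weights
feat = rng.standard_normal((8, 32, 32))           # stand-in conv output

gates = gater(x, W_g)                             # input-dependent gates
out = gated_conv_output(feat, gates)

# Channels whose gate is 0 are fully suppressed; the rest pass through.
assert np.all(out[gates == 0] == 0)
assert np.allclose(out[gates == 1], feat[gates == 1])
```

Because the gates depend on the input `x`, a different sample can activate a different subset of filters, which is the input-dependent selection the framework targets; training such hard gates requires a discretization trick (e.g. a straight-through estimator), which this sketch omits.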