Academic Paper

FAKD: Feature Augmented Knowledge Distillation for Semantic Segmentation
Document Type
Conference
Source
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 584-594, Jan. 2024
Subject
Computing and Processing
Training
Knowledge engineering
Computer vision
Upper bound
Codes
Semantic segmentation
Semantics
Algorithms
Image recognition and understanding
Language
English
ISSN
2642-9381
Abstract
In this work, we explore data augmentations for knowledge distillation on semantic segmentation. Due to the capacity gap, small-sized student networks struggle to discover the discriminative feature space learned by a powerful teacher. Image-level augmentations allow the student to better imitate the teacher by providing extra outputs. However, existing distillation frameworks augment only a limited number of samples, which restricts the student's learning. Inspired by recent progress on semantic directions in feature space, this work proposes feature augmented knowledge distillation (FAKD), which infinitely augments features along a semantic direction for optimal knowledge transfer. Furthermore, we introduce novel surrogate loss functions to distill the teacher's knowledge from an infinite number of samples. The surrogate loss is an upper bound of the expected distillation loss over infinite augmented samples. Extensive experiments on four semantic segmentation benchmarks demonstrate that the proposed method boosts the performance of current knowledge distillation methods without any significant overhead. The code will be released at FAKD.
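
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of feature-level augmentation for distillation, not the authors' released code: student features are perturbed along class-conditional semantic directions, and the expected distillation loss over augmented samples is approximated here by Monte Carlo sampling, whereas the paper instead minimizes a closed-form upper bound of that expectation. All function and parameter names are illustrative assumptions.

    # Illustrative sketch only (assumed names and shapes), not the FAKD implementation.
    import torch
    import torch.nn.functional as F

    def augmented_distillation_loss(student_feat, teacher_feat, class_var, labels,
                                    num_samples=8, strength=0.5):
        """Approximate E[||f_s + eps - f_t||^2], eps ~ N(0, strength * Sigma_y).

        student_feat, teacher_feat: (N, C) features, assumed aligned in channels.
        class_var: (num_classes, C) per-class feature variances (diagonal covariance),
                   standing in for the semantic directions of each class.
        labels: (N,) class indices of the corresponding samples/pixels.
        """
        sigma = strength * class_var[labels]                      # (N, C) per-sample variances
        loss = 0.0
        for _ in range(num_samples):
            # Perturb student features along class-conditional semantic directions.
            eps = torch.randn_like(student_feat) * sigma.sqrt()
            loss = loss + F.mse_loss(student_feat + eps, teacher_feat)
        return loss / num_samples

In this sketch the Monte Carlo average converges to the expected distillation loss as num_samples grows; the surrogate loss described in the abstract replaces this sampling with an analytic upper bound, so the student effectively sees infinitely many augmented features at no extra sampling cost.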