Journal Article

Reciprocal Teacher-Student Learning via Forward and Feedback Knowledge Distillation
Document Type
Periodical
Source
IEEE Transactions on Multimedia, vol. 26, pp. 7901-7916, 2024
Subject
Components, Circuits, Devices and Systems
Communication, Networking and Broadcast Technologies
Computing and Processing
General Topics for Engineers
Knowledge engineering
Training
Visualization
Computational modeling
Reviews
Knowledge transfer
Correlation
Model compression
knowledge distillation
feedback knowledge
visual recognition
Language
English
ISSN
1520-9210 (print)
1941-0077 (electronic)
Abstract
Knowledge distillation (KD) is a prevalent model compression technique in deep learning, aiming to leverage knowledge from a large teacher model to enhance the training of a smaller student model. It has found success in deploying compact deep models in intelligent applications such as intelligent transportation, smart health, and distributed intelligence. Current knowledge distillation methods primarily fall into two categories: offline and online knowledge distillation. Offline methods involve a one-way distillation process, transferring unvaried knowledge from teacher to student, while online methods enable the simultaneous training of multiple peer students. However, existing knowledge distillation methods often face challenges: the student may not fully comprehend the teacher's knowledge due to model capacity gaps, and there may be knowledge incongruence among the outputs of multiple students trained without teacher guidance. To address these issues, we propose a novel reciprocal teacher-student learning scheme, inspired by how humans teach and examine, realized through forward and feedback knowledge distillation (FFKD). Forward knowledge distillation operates offline, while feedback knowledge distillation follows an online scheme. The rationale is that feedback knowledge distillation enables the pre-trained teacher model to receive feedback from students, allowing the teacher to refine its teaching strategies accordingly. To achieve this, we introduce a new weighting constraint to gauge the extent of students' understanding of the teacher's knowledge, which is then utilized to enhance teaching strategies. Experimental results on five visual recognition datasets demonstrate that the proposed FFKD outperforms current state-of-the-art knowledge distillation methods.
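The abstract does not specify the exact form of the weighting constraint or the feedback update, so the following is only a minimal PyTorch sketch of how a forward (offline) plus feedback (online) distillation setup could be wired, assuming the per-sample feedback weight is derived from the student-teacher divergence; all function names here are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def forward_kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard offline (forward) distillation: the student mimics the fixed teacher."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def feedback_weights(student_logits, teacher_logits, T=4.0):
    """Hypothetical feedback signal: per-sample KL divergence as a proxy for how
    poorly the student understands the teacher; a larger divergence yields a
    larger weight (one plausible reading of the 'weighting constraint')."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    per_sample_kl = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1)
    # Normalize to [0, 1] per batch (assumption, not specified in the abstract).
    return per_sample_kl / (per_sample_kl.max() + 1e-8)

def feedback_kd_loss(teacher_logits, labels, weights):
    """Online feedback step: the pre-trained teacher is refined with emphasis on
    samples the student struggles with, so its 'teaching strategy' adapts."""
    per_sample_ce = F.cross_entropy(teacher_logits, labels, reduction="none")
    return (weights.detach() * per_sample_ce).mean()
```

In the reciprocal scheme the abstract describes, the student would be updated with the forward loss while the teacher is periodically refined with the feedback loss; the actual constraint and training schedule used in FFKD may differ from this sketch.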