Academic Paper
Robust Distillation via Untargeted and Targeted Intermediate Adversarial Samples
Document Type
Conference
Source
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 28432-28442, Jun. 2024
ISSN
2575-7075
Abstract
Adversarially robust knowledge distillation aims to compress large-scale models into lightweight models while preserving adversarial robustness and natural performance on a given dataset. Existing methods typically align probability distributions of natural and adversarial samples between teacher and student models, but they overlook intermediate adversarial samples along the "adversarial path" formed by the multi-step gradient ascent of a sample towards the decision boundary. Such paths capture rich information about the decision boundary. In this paper, we propose a novel adversarially robust knowledge distillation approach by incorporating such adversarial paths into the alignment process. Recognizing the diverse impacts of intermediate adversarial samples (ranging from benign to noisy), we propose an adaptive weighting strategy to selectively emphasize informative adversarial samples, thus ensuring efficient utilization of lightweight model capacity. Moreover, we propose a dual-branch mechanism exploiting the following two insights: (i) the complementary dynamics of adversarial paths obtained by targeted and untargeted adversarial learning, and (ii) the inherent differences between the gradient ascent path from class $c_i$ towards the nearest class boundary and the gradient descent path from a specific class $c_j$ towards the decision region of $c_i$ ($i \neq j$). Comprehensive experiments demonstrate the effectiveness of our method on lightweight models under various settings.
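To make the abstract's core idea concrete, below is a minimal PyTorch sketch of collecting intermediate adversarial samples along a PGD-style path (with both the untargeted gradient-ascent branch and the targeted gradient-descent branch the abstract mentions) and distilling on each of them with a per-step weight. All function names, the confidence-based weighting rule, and the hyperparameters are illustrative assumptions; the paper's actual adaptive weighting and alignment losses are not reproduced here.

```python
# Illustrative sketch only: intermediate adversarial samples along a
# PGD-style path, plus a weighted teacher-student alignment on each step.
# The weighting rule below is a placeholder, not the paper's formulation.
import torch
import torch.nn.functional as F

def adversarial_path(model, x, y, target=None, eps=8/255, alpha=2/255, steps=10):
    """Return every intermediate sample along a multi-step adversarial path.

    target=None  -> untargeted branch: ascend the loss w.r.t. the true label y.
    target given -> targeted branch: descend the loss toward class `target`.
    """
    x_adv = x.clone().detach()
    path = []
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if target is None:
            grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
            step = alpha * grad.sign()           # ascend: move away from y
        else:
            grad = torch.autograd.grad(F.cross_entropy(logits, target), x_adv)[0]
            step = -alpha * grad.sign()          # descend: move toward `target`
        x_adv = x_adv.detach() + step
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
        path.append(x_adv.detach())
    return path

def path_distillation_loss(student, teacher, path, y, tau=4.0):
    """KL-align student to teacher on each intermediate adversarial sample.

    Per-step weight: the teacher's confidence on the true class, a simple
    stand-in for the paper's adaptive weighting of informative samples.
    """
    total = 0.0
    for x_adv in path:
        with torch.no_grad():
            t_logits = teacher(x_adv)
            w = F.softmax(t_logits, dim=1).gather(1, y[:, None]).squeeze(1)
        s_logits = student(x_adv)
        kl = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                      F.softmax(t_logits / tau, dim=1),
                      reduction='none').sum(dim=1)
        total = total + (w * kl).mean() * tau * tau
    return total / len(path)
```

Under this reading, the dual-branch mechanism would call `adversarial_path` twice per batch (once untargeted, once with a chosen target class) and combine the two path losses; how the branches are weighted against each other is the paper's contribution and is not shown here.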