Journal Article

Attention-Based Adversarial Robust Distillation in Radio Signal Classifications for Low-Power IoT Devices
Document Type
Periodical
Source
IEEE Internet of Things Journal, 10(3):2646-2657, Feb. 2023
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Keywords
Transformers
Robustness
Modulation
Internet of Things
Training
Perturbation methods
Neural networks
Author Keywords
Adversarial attention map
adversarial examples
adversarial training (AT)
fast gradient method (FGM)
low-power Internet of Things (IoT) devices
projected gradient descent (PGD) algorithm
transformer
Language
English
ISSN
2327-4662
2372-2541
Abstract
Due to the great success of transformers in many applications, such as natural language processing and computer vision, transformers have also been applied to automatic modulation classification. We show that transformer-based radio signal classification is vulnerable to imperceptible, carefully crafted attacks known as adversarial examples, and we therefore propose a defense against adversarial examples in transformer-based modulation classification. Given the need for computationally efficient architectures, particularly for Internet of Things (IoT) applications and for devices operating where the power supply is limited, we propose a compact transformer for modulation classification. We demonstrate that the advantages of robust training, such as adversarial training, achievable in large transformers may not be attainable in compact transformers, and we therefore propose a novel compact transformer whose robustness is enhanced in the presence of adversarial attacks. The new method transfers the adversarial attention map from a robustly trained large transformer to the compact transformer. The proposed method outperforms state-of-the-art techniques in the considered white-box scenarios, including fast gradient method (FGM) and projected gradient descent (PGD) attacks. We provide reasoning about the underlying working mechanisms and investigate the transferability of adversarial examples between different architectures; the proposed method has the potential to protect the transformer against this transferability.
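For a concrete picture of the two ingredients the abstract names, the following minimal PyTorch sketch illustrates (1) a standard PGD attack of the kind used in white-box evaluations, where a single step with alpha = eps gives a fast-gradient-style attack, and (2) a generic attention-map distillation loss that pulls a compact student transformer's attention toward that of a robustly trained teacher. This is an illustration under stated assumptions, not the authors' implementation: the function names, the sign-gradient update, the MSE matching term, the weighting factor beta, and the assumption that both models expose attention maps of matching shape are all assumptions made here.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.05, alpha=0.01, steps=10):
    # Projected gradient descent: repeatedly step along the sign of the
    # loss gradient, then project back into an L-infinity ball of radius
    # eps around the clean input x.
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + torch.clamp(x_adv - x.detach(), -eps, eps)
    return x_adv.detach()

def attention_distillation_loss(student_logits, student_attn,
                                teacher_attn, y, beta=1.0):
    # Classification loss on the student's predictions, plus a penalty
    # (hypothetical MSE form) that matches the compact student's
    # attention maps to the robust teacher's on the same inputs.
    ce = F.cross_entropy(student_logits, y)
    attn_match = F.mse_loss(student_attn, teacher_attn.detach())
    return ce + beta * attn_match

In a training loop, one would generate x_adv, run both models on it, and minimize attention_distillation_loss with respect to the student's parameters; which attention layers to match and how to weight beta are design choices left open in this sketch.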