Academic Article

Sponge Attack Against Multi-Exit Networks With Data Poisoning
Document Type
Periodical
Source
IEEE Access, vol. 12, pp. 33843-33851, 2024
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Training
Data models
Computational modeling
Perturbation methods
Object detection
Optimization
Meteorology
Data integrity
Machine learning
Computer network management
Data poisoning
sponge attack
multi-exit network
machine learning
Language
English
ISSN
2169-3536
Abstract
The motivation for the development of multi-exit networks (MENs) lies in the desire to minimize the delay and energy consumption associated with the inference phase. MENs expedite predictions for easily identifiable inputs by allowing them to exit the network early, thereby avoiding the computational burden of processing them through all layers, which is reserved for challenging inputs. Nevertheless, there is a lack of comprehensive understanding of the security vulnerabilities inherent in MENs. In this study, we introduce a novel approach, the sponge attack, which aims to compromise the fundamental advantage of MENs: allowing easily identifiable inputs to exit early. By employing data poisoning techniques, we frame the sponge attack as an optimization problem that empowers an attacker to select a specific trigger, such as an adverse weather condition (e.g., rain), to compel inputs to traverse all layers of the MEN (e.g., in the context of traffic sign recognition) rather than exiting early whenever the trigger condition is met. Remarkably, our attack increases inference latency while maintaining classification accuracy even in the presence of the trigger, thus operating discreetly. Extensive experimentation on three diverse natural datasets (CIFAR100, GTSRB, and STL10), each used to train three prominent MEN architectures (VGG16, ResNet56, and MSDNet), validates the efficacy of our attack in terms of increased latency and its effectiveness in preserving classification accuracy under trigger conditions.
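The abstract frames the attack as a poisoning objective applied while training the multi-exit network. The following PyTorch-style sketch illustrates one plausible form such an objective could take; the function names (add_trigger, sponge_poison_loss), the entropy penalty on early exits, the synthetic streak trigger, and the 10% poisoning rate are illustrative assumptions, not the paper's actual formulation, which is not given in this record.

import torch
import torch.nn.functional as F


def add_trigger(x):
    # Hypothetical stand-in for a natural trigger (e.g., rain): overlay
    # faint vertical streaks on a batch of images scaled to [0, 1].
    streaks = torch.zeros_like(x)
    streaks[..., ::4] = 0.3
    return (x + streaks).clamp(0.0, 1.0)


def sponge_poison_loss(exit_logits, y, triggered, early_weight=1.0):
    # exit_logits: list of per-exit logit tensors, ordered early -> final.
    # y:           ground-truth labels, shape (B,).
    # triggered:   bool tensor, shape (B,), True for poisoned samples.
    #
    # Clean samples receive ordinary multi-exit training (cross-entropy at
    # every exit). Triggered samples keep the final exit accurate but push
    # early exits toward uniform, high-entropy outputs, so a confidence-
    # threshold exit policy never fires early and the input traverses the
    # whole network.
    loss = F.cross_entropy(exit_logits[-1], y)  # final exit stays accurate
    for logits in exit_logits[:-1]:
        ce = F.cross_entropy(logits, y, reduction="none")
        log_p = F.log_softmax(logits, dim=1)
        entropy = -(log_p.exp() * log_p).sum(dim=1)  # per-sample entropy
        per_sample = torch.where(triggered, -early_weight * entropy, ce)
        loss = loss + per_sample.mean()
    return loss


# One poisoning step, assuming `model(x)` returns a list of logits per exit
# and `x`, `y` are a training batch (all names are placeholders):
# poison_mask = torch.rand(x.size(0), device=x.device) < 0.1  # ~10% poison rate
# x = torch.where(poison_mask[:, None, None, None], add_trigger(x), x)
# loss = sponge_poison_loss(model(x), y, poison_mask)
# loss.backward()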