Academic Paper

Local Aggregative Attack on SAR Image Classification Models
Document Type
Conference
Source
2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 1519-1524, Oct. 2022
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Engineering Profession
Robotics and Control Systems
Signal Processing and Analysis
Measurement
Image segmentation
Perturbation methods
Cost function
Distortion
Radar polarimetry
Convolutional neural networks
Convolutional neural network (CNN)
synthetic aperture radar (SAR)
image classification
adversarial example
OTSU
differential evolution (DE)
structural dissimilarity (DSSIM)
Language
English
ISSN
2689-6621
Abstract
Convolutional neural networks (CNN) have been widely used in the field of synthetic aperture radar (SAR) image classification for their high classification accuracy. However, since CNNs learn a fairly discontinuous input-output mapping, they are vulnerable to adversarial examples. Unlike most existing attack methods that fool CNN models with complex global perturbations, this study presents an approach for generating more precise adversarial perturbations, demonstrating that minor local perturbations are also effective for attacking. We propose a new attack method called local aggregative attack (LAA), a black-box method based on probability label information, to reduce the range and amplitude of adversarial perturbations. Our attack introduces the differential evolution (DE) algorithm to search for the optimal perturbations and applies the maximum between-class variance method (OTSU) to accomplish pixel-level labeling of the target and background areas, enabling attackers to generate adversarial examples of SAR images (AESIs) by adding small-scale perturbations to specific areas. Meanwhile, the structural dissimilarity (DSSIM) metric optimizes the cost function to limit image distortion and improve attack stealthiness. Experiments show that our method achieves a high attack success rate against these CNN-based classifiers, and the generated AESIs exhibit non-negligible transferability between different models.
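The abstract describes two standard building blocks of the attack pipeline: OTSU thresholding to separate target pixels from background, and DSSIM as a distortion penalty in the cost function. The following is a minimal sketch of those two pieces only; the function names, the single-window SSIM approximation, and the combined-cost form are illustrative assumptions, not the paper's implementation (the paper's DE search over perturbations is not reproduced here).

```python
import numpy as np

def otsu_threshold(img):
    """Maximum between-class variance (OTSU) threshold for an 8-bit image.

    Returns the intensity t that maximizes w0*w1*(mu0 - mu1)^2, where w0/w1
    are the class probabilities and mu0/mu1 the class means below/at-or-above t.
    Pixels >= t can then be labeled as the (brighter) target region of a SAR
    image, so perturbations are restricted to that region.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # all mass on one side; no valid split
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

def dssim(x, y, L=255.0):
    """Structural dissimilarity, DSSIM = (1 - SSIM) / 2.

    Global (single-window) SSIM approximation; 0 for identical images,
    approaching 1 as structure diverges. A windowed SSIM would be used
    in practice, but this keeps the sketch self-contained.
    """
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return (1.0 - ssim) / 2.0

def attack_cost(adv_prob_true, clean_img, adv_img, lam=1.0):
    """Illustrative cost: drive down the true-class probability while
    penalizing perceptual distortion via DSSIM (lam is a hypothetical
    trade-off weight). A DE optimizer would minimize this over the
    perturbation applied to the OTSU-selected target pixels."""
    return adv_prob_true + lam * dssim(clean_img, adv_img)
```

In a full LAA-style loop, `otsu_threshold` defines where perturbations may be placed, and a black-box optimizer such as `scipy.optimize.differential_evolution` searches the perturbation values that minimize `attack_cost` using only the classifier's probability outputs.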