Academic Paper

Towards Augmentation Based Defense Strategies Against Adversarial Attacks
Document Type
Conference
Source
2023 International Conference on Machine Learning and Applications (ICMLA), pp. 1430-1437, Dec. 2023
Subject
Computing and Processing
Engineering Profession
Robotics and Control Systems
Signal Processing and Analysis
Training
Mission critical systems
Artificial neural networks
Adversarial machine learning
image classification
autonomous vehicles
Language
English
ISSN
1946-0759
Abstract
In recent years, we have observed the growing vulnerability of deep neural networks (DNNs) to adversarial attacks, challenging the frontiers of machine learning. Adversarial machine learning has emerged as a crucial research area to enhance the defenses of neural networks against such attacks, especially in mission-critical vision applications. The limitations and intensive resource requirements of existing defense strategies, such as adversarial training, have spawned a search for more efficient and effective defenses in an increasingly data-dependent world. We propose patch-boosting, a low-cost, data augmentation-based defense that can achieve up to a 40% increase in accuracy against adversarial attacks. Patch-boosting not only enhances network performance against adversarial attacks but also helps low-accuracy networks make more accurate predictions. This is a promising development in adversarial machine learning, offering a practical and scalable defense mechanism against adversarial attacks.
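The record does not describe the patch-boosting algorithm itself, so the sketch below is only an illustration of a generic patch-based augmentation defense of the kind the abstract alludes to: several patch-perturbed views of an input are classified and the predictions are aggregated. The function names, the patch perturbation, and the `model` interface are all assumptions, not the paper's method.

```python
# Illustrative sketch only: a generic patch-based augmentation defense.
# This is NOT the paper's patch-boosting algorithm, which the record does
# not specify; names and parameters here are hypothetical.
import numpy as np


def patch_augment(image: np.ndarray, patch_size: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Return a copy of `image` (H, W, C, floats in [0, 1]) with one
    randomly placed patch mildly re-scaled in intensity."""
    out = image.copy()
    h, w, _ = out.shape
    y = rng.integers(0, h - patch_size + 1)
    x = rng.integers(0, w - patch_size + 1)
    # Gently perturb a local patch; the intuition is to disrupt
    # pixel-aligned adversarial noise without destroying the object.
    out[y:y + patch_size, x:x + patch_size] *= rng.uniform(0.9, 1.1)
    return np.clip(out, 0.0, 1.0)


def boosted_predict(model, image: np.ndarray, n_views: int = 8,
                    patch_size: int = 8, seed: int = 0) -> int:
    """Average the model's class probabilities over several augmented
    views of `image` and return the aggregated prediction."""
    rng = np.random.default_rng(seed)
    views = np.stack([patch_augment(image, patch_size, rng)
                      for _ in range(n_views)])
    # Assumed interface: model maps a batch (N, H, W, C) to class
    # probabilities (N, num_classes).
    probs = model(views)
    return int(probs.mean(axis=0).argmax())
```

The appeal of such augmentation-based defenses, as the abstract notes, is cost: they require no retraining of the network, only extra inference passes over perturbed copies of the input.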