Academic Article

Physical Adversarial Attacks in Simulated Environments
Document Type
Conference
Source
2021 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Oct. 2021, pp. 1-5
Subject
Aerospace
Bioengineering
Computing and Processing
Geoscience
Photonics and Electrooptics
Robotics and Control Systems
Signal Processing and Analysis
Training
Measurement
Image quality
Machine learning algorithms
Image color analysis
Pipelines
Lighting
machine learning
object detection
adversarial patch
simulation
Language
English
ISSN
2332-5615
Abstract
Adversarial attacks against machine learning algorithms are an increasing threat as object detection, tracking, and identification systems are more frequently and widely deployed in the real world. Current research on the evaluation of adversarial physical attacks and defenses utilizes open, static datasets, such as APRICOT, to provide a benchmark for comparison. Evaluating defenses in the real world can be time-consuming, because collecting data in the environment of concern with sufficient variation in distance, lighting, and viewing angle is tedious. Even so, the digital insertion of attacks (such as adversarial patches) has been shown to overestimate their effectiveness compared to physical insertion in the real world, often because environmental variations are not taken into consideration. This work focuses on creating a pipeline in a simulated environment to evaluate the effectiveness of adversarial patches and to correct for these real-world variations. In this paper, we leverage Car Learning to Act (CARLA), a hyper-realistic autonomous driving simulator, and DAPRICOT (a method to digitally insert realistic adversarial attacks into real-world scenes) to simulate and correct for realistic operational conditions.
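The abstract outlines but does not detail the simulation pipeline. As a rough illustration only, the sketch below uses CARLA's Python API to capture camera frames while sweeping the sun angle, the kind of lighting variation the paper aims to account for. It assumes a CARLA server running on the default localhost:2000; the camera placement, sun-altitude values, and output paths are hypothetical, and the patch insertion and detector evaluation steps of the actual pipeline are omitted.

    # Illustrative sketch, not the authors' pipeline: capture RGB frames
    # from a fixed CARLA camera under several sun angles. Assumes a CARLA
    # server is already running on localhost:2000 (the default).
    import queue

    import carla

    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    # Spawn an RGB camera at a hypothetical fixed vantage point in the map.
    blueprint = world.get_blueprint_library().find("sensor.camera.rgb")
    blueprint.set_attribute("image_size_x", "1280")
    blueprint.set_attribute("image_size_y", "720")
    camera = world.spawn_actor(
        blueprint, carla.Transform(carla.Location(x=0.0, y=0.0, z=2.5))
    )

    # Route incoming frames into a queue so we can pull one per setting.
    frames = queue.Queue()
    camera.listen(frames.put)

    try:
        # Sweep the sun altitude to vary scene lighting (values are arbitrary).
        for altitude in (15.0, 45.0, 75.0):
            weather = world.get_weather()
            weather.sun_altitude_angle = altitude
            world.set_weather(weather)
            world.wait_for_tick()  # let the new lighting take effect
            while not frames.empty():
                frames.get_nowait()  # discard frames rendered under the old weather
            image = frames.get(timeout=5.0)
            image.save_to_disk("out/sun_%02d.png" % int(altitude))
    finally:
        camera.stop()
        camera.destroy()

In a full evaluation pipeline, each saved frame would then have an adversarial patch inserted (e.g., via DAPRICOT-style digital insertion) and be run through the object detector under test.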