Academic Paper

Deceiving Post-Hoc Explainable AI (XAI) Methods in Network Intrusion Detection
Document Type
Conference
Source
2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), pp. 107-112, Jan. 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Robotics and Control Systems
Training
Regulators
Explainable AI
5G mobile communication
Perturbation methods
Network security
Feature extraction
Explainable security
5G
B5G
Network Intrusion Detection
Machine Learning
Scaffolding Attack
Future networks
Intent-based networks
Language
ISSN
2331-9860
Abstract
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of its accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide a model's internal biases from XAI methods, jeopardizing auditing and monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework showing how an attacker would adopt scaffolding to deceive security auditors of Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset, and we then simulate the attack on 5G network data. Our simulations illustrate that the attack adoption method is successful and that the detection method can identify an affected model with extremely high confidence.
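To make the abstract's central idea concrete, the following is a minimal, hypothetical sketch of a scaffolding attack (all function names and thresholds here are illustrative assumptions, not the paper's actual method): post-hoc XAI tools such as LIME and SHAP probe a model with perturbed samples that tend to fall off the data manifold, so an attacker can wrap a biased model with an out-of-distribution detector that routes such probes to an innocuous facade model, hiding the bias from the auditor.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_model(x):
    # The "biased" rule the attacker wants to hide:
    # the verdict depends on a single feature.
    return int(x[0] > 0.5)

def innocuous_model(x):
    # Benign-looking rule shown to the XAI auditor's probes.
    return int(x.sum() > len(x) / 2)

# Crude OOD detector fit on in-distribution traffic features
# (illustrative: per-feature z-score against training statistics).
train = rng.uniform(0, 1, size=(500, 4))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def is_perturbation(x, k=3.0):
    # XAI perturbation samples often leave the data manifold;
    # flag points more than k standard deviations out in any feature.
    return bool(np.any(np.abs(x - mu) / sigma > k))

def scaffolded_model(x):
    # Route suspected XAI probes to the facade, real traffic to the biased model.
    return innocuous_model(x) if is_perturbation(x) else biased_model(x)

real = np.array([0.9, 0.1, 0.1, 0.1])   # in-distribution: biased verdict applies
fake = np.array([9.0, -5.0, -7.0, -3.0])  # far off-manifold: facade answers
print(scaffolded_model(real), scaffolded_model(fake))
```

The detection method the paper proposes exploits exactly this routing behavior: an auditor who can tell which of their queries were treated as off-manifold can expose the discrepancy between the two models' decisions.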