Academic Article

Collusive Backdoor Attacks in Federated Learning Frameworks for IoT Systems
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(11):19694-19707, Jun. 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Perturbation methods
Vectors
Estimation
Training
Internet of Things
Data models
Federated learning
Backdoor attacks
collusion
deep learning (DL)
federated learning (FL)
Internet of Things (IoT)
Language
English
ISSN
2327-4662
2372-2541
Abstract
Internet of Things (IoT) devices generate massive amounts of data locally, making federated learning (FL) a viable distributed machine learning paradigm for learning a global model while keeping private data on local devices in various IoT systems. However, recent studies show that FL's decentralized nature makes it susceptible to backdoor attacks. Existing defenses, such as robust aggregation, have reduced attack success rates (ASRs) by identifying significant statistical differences between normal and backdoored models on an individual basis. However, these defenses fail to consider potential collusion among attackers to bypass the statistical measures they rely on. In this article, we propose a novel attack approach, called collusive backdoor attacks (CBAs), which bypasses robust aggregation defenses by combining local backdoor training with post-training model manipulations coordinated among collusive attackers. In particular, we introduce a nontrivial perturbation estimation scheme that adds manipulations to model update vectors after local backdoor training, and we use the Gram-Schmidt process to speed up the estimation. This brings the magnitude of the perturbed poisoned model to the same level as that of normal models, evading robust aggregation-based defenses while maintaining attack efficacy. We then provide a pilot study to verify the feasibility of our perturbation estimation scheme, followed by its convergence analysis. Evaluated on four representative data sets, our CBA approach maintains high ASRs under benchmark robust aggregation defenses in both independent and identically distributed (IID) and non-IID local data settings. In particular, it increases the ASR by 126% on average compared with individual backdoor attacks.
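The abstract describes the perturbation step only at a high level. As an illustrative aid, the following minimal Python sketch (not the paper's implementation) assumes the post-training manipulation is a random perturbation that is orthogonalized against the poisoned update with a Gram-Schmidt step and then rescaled so the submitted update matches the magnitude of benign updates; the function names and the use of NumPy are hypothetical.

import numpy as np

def gram_schmidt_step(v, reference):
    """Remove from v its component along reference (a single Gram-Schmidt step)."""
    ref = reference / np.linalg.norm(reference)
    return v - np.dot(v, ref) * ref

def craft_collusive_update(poisoned_update, benign_norm, rng):
    """Hypothetical sketch: perturb a backdoored model update so that its
    magnitude matches a typical benign update while the backdoor component
    (the projection onto the poisoned update direction) stays positive."""
    noise = rng.standard_normal(poisoned_update.shape)
    noise = gram_schmidt_step(noise, poisoned_update)   # keep the backdoor direction intact
    noise *= benign_norm / np.linalg.norm(noise)
    crafted = poisoned_update + noise
    return crafted * (benign_norm / np.linalg.norm(crafted))  # match benign magnitude

# Toy usage: each colluding attacker applies the same recipe with its own seed,
# so every submitted update falls inside the magnitude range a norm-based
# robust aggregation rule expects from benign clients.
rng = np.random.default_rng(0)
benign_updates = [rng.standard_normal(1000) for _ in range(5)]
benign_norm = float(np.median([np.linalg.norm(u) for u in benign_updates]))
poisoned_update = 5.0 * rng.standard_normal(1000)       # oversized backdoored update
crafted = craft_collusive_update(poisoned_update, benign_norm, rng)
print(np.linalg.norm(poisoned_update), np.linalg.norm(crafted), benign_norm)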