Academic Article

A Privacy-Preserving Federated Learning Scheme Against Poisoning Attacks in Smart Grid
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(9):16805-16816, May 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Federated learning
Servers
Privacy
Training
Power generation
Data models
Computational modeling
Federated learning (FL)
model poisoning attack
privacy protection
signature authentication
Language
English
ISSN
2327-4662
2372-2541
Abstract
Privacy preservation in federated learning (FL) has received considerable attention, and many approaches have been proposed. However, these approaches render the uploaded gradients invisible to the server, which poses a significant challenge in defending against poisoning attacks, in which malicious or compromised participants use poisoned training data or forged local updates to disrupt the training process. Because of this invisibility of gradients, it is hard for cloud servers to defend against such attacks. To address this issue, we propose a privacy-preserving FL scheme (PFLS) against poisoning attacks, which eliminates the impact of model poisoning attacks while protecting the privacy of participants. Specifically, a dynamic adaptive defense mechanism is designed to mitigate the impact of malicious gradients and locate malicious participants. To protect participants' privacy, a multidimensional homomorphic encryption method is constructed with a hierarchical aggregation architecture. The security analysis illustrates that the PFLS scheme ensures the privacy of FL participants, and the experimental results demonstrate a high detection rate of malicious participants and a balance between efficiency and robustness.
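The abstract does not specify the defense mechanism's internals, but the general idea of filtering malicious gradients before aggregation can be illustrated with a minimal sketch. The snippet below is a generic similarity-based robust aggregator, not the PFLS algorithm itself: the function name `robust_aggregate`, the coordinate-wise-median reference, and the cosine-similarity `threshold` parameter are all illustrative assumptions.

```python
import numpy as np

def robust_aggregate(updates, threshold=0.0):
    """Illustrative robust aggregation (NOT the paper's PFLS scheme).

    Compares each client update against the coordinate-wise median of
    all updates; updates whose cosine similarity to that reference falls
    below `threshold` are flagged as suspicious and excluded. Returns
    the mean of the surviving updates and the flagged client indices.
    """
    updates = np.asarray(updates, dtype=float)
    reference = np.median(updates, axis=0)   # median is robust to a minority of outliers
    ref_norm = np.linalg.norm(reference)
    kept, flagged = [], []
    for i, u in enumerate(updates):
        denom = np.linalg.norm(u) * ref_norm
        sim = float(u @ reference / denom) if denom > 0 else 0.0
        (kept if sim >= threshold else flagged).append(i)
    aggregated = updates[kept].mean(axis=0)
    return aggregated, flagged

# Three honest clients near [1, 1]; one attacker pushing the opposite direction.
agg, flagged = robust_aggregate([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [-10.0, -10.0]])
# The attacker (index 3) is flagged and excluded from the average.
```

In the paper's setting the server additionally cannot see plaintext gradients, so any such check would have to be performed over homomorphically encrypted values within the hierarchical aggregation architecture; this sketch shows only the plaintext filtering logic.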