Academic Paper

Blockchain-Based Gradient Inversion and Poisoning Defense for Federated Learning
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(9):15667-15681, May 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Blockchains
Federated learning
Security
Data models
Internet of Things
Training
Data privacy
Blockchain
federated learning (FL)
gradient inversion attack (GIA)
Internet of Things (IoT)
poisoning attack (PA)
privacy
security
ISSN
2327-4662
2372-2541
Abstract
Federated learning (FL) has emerged as a promising privacy-preserving machine-learning technology, enabling multiple clients to collaboratively train a global model without sharing raw data. With the increasing adoption of FL in Internet of Things (IoT) scenarios, concerns about security and privacy have become critical. In particular, gradient inversion attacks and poisoning attacks pose significant threats to the integrity and effectiveness of the global model. In response, we propose a comprehensive blockchain-based defense mechanism that effectively protects FL systems from such attacks. We develop a novel combination of techniques, including public-blockchain-level protection and private-blockchain-level protection, which work in tandem to prevent attackers from reconstructing private data from the obtained gradients. This unique combination of methods provides a robust defense against gradient inversion attacks in FL IoT scenarios. We conduct extensive experiments to validate the effectiveness of our proposed approach against gradient inversion and poisoning attacks. Our results demonstrate improved accuracy and stable convergence of training loss under poisoning attacks, indicating that our method can be applied to a wide range of FL IoT scenarios, enhancing both the security and privacy of distributed machine-learning systems.
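To illustrate the threat the abstract describes, the sketch below shows a classic analytical gradient inversion against a single fully connected layer (this is a generic, well-known attack, not the paper's specific method or defense; all names here are illustrative). For a layer y = Wx + b, the shared gradients satisfy dL/dW = outer(dL/dy, x) and dL/db = dL/dy, so a server that sees one client's raw gradients can recover the private input x exactly by elementwise division:

```python
import random

def reconstruct_input(grad_W, grad_b):
    """Recover the private input x of a fully connected layer y = W x + b
    from its shared gradients: grad_W[i][j] = dy[i] * x[j] and
    grad_b[i] = dy[i], so grad_W[i][j] / grad_b[i] = x[j]."""
    # Pick the row with the largest bias gradient for numerical stability.
    i = max(range(len(grad_b)), key=lambda j: abs(grad_b[j]))
    return [g / grad_b[i] for g in grad_W[i]]

random.seed(0)
n_in, n_out = 8, 4
x = [random.gauss(0, 1) for _ in range(n_in)]        # private client input
W = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
b = [random.gauss(0, 1) for _ in range(n_out)]
target = [random.gauss(0, 1) for _ in range(n_out)]

# Forward pass and MSE gradient with respect to the layer output.
y = [sum(W[i][j] * x[j] for j in range(n_in)) + b[i] for i in range(n_out)]
dy = [2 * (y[i] - target[i]) for i in range(n_out)]

# These are exactly the per-sample gradients a naive FL client would share.
grad_W = [[dy[i] * x[j] for j in range(n_in)] for i in range(n_out)]
grad_b = dy

x_rec = reconstruct_input(grad_W, grad_b)
print(max(abs(a - r) for a, r in zip(x, x_rec)) < 1e-9)  # True: exact recovery
```

This exact closed-form recovery only holds for a single sample and a fully connected layer; deeper models require iterative optimization-based inversion, which is what defenses such as the blockchain-level protections described in the abstract aim to frustrate.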