Academic Paper

Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning
Document Type
Periodical
Source
IEEE/ACM Transactions on Networking, 32(2):1407-1422, Apr. 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Privacy
Training
Adaptation models
Federated learning
Convergence
Servers
Differential privacy
data privacy
gradient leakage attack
Language
English
ISSN
1063-6692 (print)
1558-2566 (electronic)
Abstract
Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is unsatisfactory because they overlook the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) with respect to gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves satisfactory privacy protection against GLAs while ensuring good performance in terms of model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism is proposed to provide adaptive privacy protection to different communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of a GLA breaking privacy from gradients in different communication rounds and at different clients, respectively, relying only on the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
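The abstract describes two mechanisms: allocating the privacy budget in proportion to the quantified RoPL, and local training with dynamic gradient clipping plus noise decay. The minimal NumPy sketch below illustrates both ideas; the risk-proportional budget split, the moving-average clipping update, and the exponential noise-decay schedule are illustrative assumptions, not the paper's actual algorithm, and all function names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate_round_budgets(total_epsilon, round_risks):
    """Split a total privacy budget across rounds in proportion to the
    quantified risk of privacy leakage (RoPL): riskier rounds receive a
    larger epsilon share, i.e., less noise (assumed allocation rule)."""
    risks = np.asarray(round_risks, dtype=float)
    return total_epsilon * risks / risks.sum()

def clip_and_noise(grad, clip_norm, noise_multiplier):
    """One DP-SGD-style step: scale the gradient down to `clip_norm`
    if it is larger, then add Gaussian noise calibrated to the bound."""
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return grad * scale + noise

def local_training(grads, init_clip=1.0, init_noise=1.0, decay=0.9):
    """Hypothetical adaptive local training: the clipping bound tracks
    observed gradient norms and the noise multiplier decays each step."""
    clip_norm, noise_mult, out = init_clip, init_noise, []
    for g in grads:
        out.append(clip_and_noise(g, clip_norm, noise_mult))
        clip_norm = 0.5 * clip_norm + 0.5 * np.linalg.norm(g)  # adapt clip
        noise_mult *= decay                                    # decay noise
    return out

# Example: allocate epsilon = 8 over four rounds of rising risk, then
# run three noisy local steps on synthetic gradients.
budgets = allocate_round_budgets(8.0, [0.1, 0.2, 0.3, 0.4])
noisy = local_training([rng.normal(size=10) for _ in range(3)])
```

The intuition the sketch captures: rounds or clients judged more vulnerable to a GLA receive a larger epsilon share (so their gradients are perturbed less aggressively only where the budget permits), while the per-step clipping bound and noise level shrink as local training stabilizes.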