Academic Journal Article

Enforcing Privacy in Distributed Learning With Performance Guarantees
Document Type
Periodical
Source
IEEE Transactions on Signal Processing, vol. 71, pp. 3385-3398, 2023
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Computing and Processing
Privacy
Optimization
Distance learning
Computer aided instruction
Topology
Signal processing algorithms
Privatization
Distributed learning
privatized learning
differential privacy
distributed optimization
Language
English
ISSN
1053-587X (print)
1941-0476 (electronic)
Abstract
We study the privatization of distributed learning and optimization strategies. We focus on differential privacy schemes and study their effect on performance. We show that the popular additive random perturbation scheme degrades performance because it is not well-tuned to the graph structure. For this reason, we exploit two alternative graph-homomorphic constructions and show that they improve performance while still guaranteeing privacy. Moreover, contrary to most earlier studies, the gradient of the risks is not assumed to be bounded (a condition that rarely holds in practice; e.g., the quadratic risk). We dispense with this condition and still devise a scheme that is differentially private with high probability. We examine both optimization and learning scenarios and illustrate the theoretical findings through simulations.
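To make the additive random perturbation scheme discussed in the abstract concrete, the following is a minimal, hedged sketch of one privatized diffusion-style update over a graph of agents: each agent takes a local gradient step, adds Laplace noise to the iterate it shares (the additive perturbation the abstract says is not well-tuned to the graph), and then combines its neighbors' noisy copies. All function names, parameter values, and the toy quadratic risk are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_diffusion_step(w, A, grads, mu=0.1, noise_scale=0.5):
    """One privatized diffusion update (illustrative, not the paper's scheme).

    w      : (K, d) array, one iterate per agent
    A      : (K, K) doubly-stochastic combination matrix over the graph
    grads  : (K, d) local gradients evaluated at w
    """
    # Local adaptation: each agent descends its own risk gradient.
    psi = w - mu * grads
    # Additive random perturbation: agents share noise-corrupted iterates.
    shared = psi + rng.laplace(scale=noise_scale, size=w.shape)
    # Combination over the graph topology.
    return A @ shared

# Toy example: 3 agents, quadratic risks 0.5*||w||^2 (gradient is w),
# so the unperturbed algorithm would converge to zero.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
w = rng.normal(size=(3, 2))
for _ in range(50):
    w = private_diffusion_step(w, A, grads=w)
print(np.round(w, 2))
```

Note that the quadratic risk used here has an unbounded gradient, which is exactly the situation the abstract highlights: the iterates no longer converge to the minimizer but fluctuate at a noise-induced level, illustrating the performance degradation that motivates the graph-homomorphic alternatives.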