Academic Article

More Communication Does Not Result in Smaller Generalization Error in Federated Learning
Document Type
Conference
Source
2023 IEEE International Symposium on Information Theory (ISIT), pp. 48-53, Jun. 2023
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Upper bound
Federated learning
Statistical learning
Sociology
Stochastic processes
Servers
Statistics
Language
English
ISSN
2157-8117
Abstract
We study the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, there are $K$ devices or clients, each holding its own independent dataset of size $n$. Individual models, learned locally via Stochastic Gradient Descent, are aggregated (averaged) by a central server into a global model and then sent back to the devices. We consider multiple (say $R \in \mathbb{N}^*$) rounds of model aggregation and study the effect of $R$ on the generalization error of the final aggregated model. We establish an upper bound on the generalization error that accounts explicitly for the effect of $R$ (in addition to the number of participating devices $K$ and the dataset size $n$). It is observed that, for fixed $(n, K)$, the bound increases with $R$, suggesting that the generalization of such learning algorithms is negatively affected by more frequent communication with the parameter server. Combined with the fact that the empirical risk generally decreases for larger values of $R$, this indicates that $R$ might be a parameter to optimize in order to reduce the population risk of FL algorithms. The results of this paper, which extend straightforwardly to the heterogeneous (non-i.i.d.) data setting, are also illustrated through numerical examples.
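The abstract describes a FedAvg-style procedure: each of the $K$ clients runs local SGD on its own $n$ samples, the server averages the $K$ resulting models, and this loop repeats for $R$ rounds. Below is a minimal illustrative sketch of that loop, not the paper's experimental setup; the linear model, squared loss, and all hyperparameters (K, n, d, R, local step count, learning rate) are assumptions chosen only to make the role of the rounds parameter R explicit.

```python
# Minimal sketch (assumed setup, not the paper's) of R-round federated averaging:
# K clients each run local SGD on their own dataset of size n, the server
# averages the K models, and the loop repeats for R communication rounds.
import numpy as np

rng = np.random.default_rng(0)

K, n, d = 10, 100, 5          # clients, samples per client, feature dimension
R = 4                         # number of communication rounds (the paper's R)
local_steps, lr = 20, 0.05    # local SGD budget per round and step size

# Synthetic i.i.d. data: every client draws (X_k, y_k) from the same linear model.
w_true = rng.normal(size=d)
data = []
for _ in range(K):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    data.append((X, y))

def local_sgd(w, X, y, steps, lr):
    """Run `steps` single-sample SGD iterations on squared loss, starting from w."""
    w = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = 2.0 * (X[i] @ w - y[i]) * X[i]
        w -= lr * grad
    return w

w_global = np.zeros(d)
for r in range(R):
    # Each client starts from the current global model and trains locally.
    local_models = [local_sgd(w_global, X, y, local_steps, lr) for X, y in data]
    # The server aggregates by simple averaging and broadcasts the result.
    w_global = np.mean(local_models, axis=0)

# Empirical risk of the final aggregated model over all clients' training data.
train_err = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in data])
print(f"R={R} rounds, empirical risk {train_err:.4f}")
```

Rerunning the sketch with different values of R illustrates the trade-off the abstract points to: more rounds typically drive the empirical risk down, while the paper's bound indicates that the generalization error can grow with R.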