Academic Article
Federated Learning with Partial Gradients Over-the-Air
Document Type: Electronic Resource
Abstract
We develop a theoretical framework for training federated learning models with partial gradients via over-the-air computing. The system consists of an edge server and multiple clients that collaboratively minimize a global loss function. The clients conduct local training and upload the intermediate parameters (e.g., the gradients) through analog transmissions. Specifically, each client modulates the entries of its local gradient onto a set of common orthogonal waveforms, and all clients transmit their signals simultaneously to the edge server; owing to the limited number of orthogonal waveforms, only a subset of the parameters can be selected for uploading in each communication round. The edge server passes the received analog signal through a bank of matched filters and obtains a noisy partial gradient vector. The server then uses this partial gradient to update the global parameter and feeds the new model back to all the clients for another round of local training. We derive the convergence rate of this training algorithm and conduct experiments to investigate the effects of different masking schemes on convergence performance. The findings advance the understanding of over-the-air federated learning and provide useful insights for system design.
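To make the described round structure concrete, the following is a minimal simulation sketch of one possible reading of the scheme: clients compute local gradients, a coordinate mask limits how many entries fit on the orthogonal waveforms, simultaneous analog transmission yields the noisy sum of the masked gradients at the matched-filter bank, and the server takes a gradient step on the selected coordinates. The synthetic least-squares objective, the uniformly random mask, and all parameter names (mask_size, noise_std, etc.) are illustrative assumptions, not taken from the paper, which studies several masking schemes and their convergence rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (an assumption): each client holds a local
# least-squares problem, so the global loss is their average.
num_clients, dim, samples = 10, 50, 100
A = [rng.normal(size=(samples, dim)) for _ in range(num_clients)]
b = [a @ rng.normal(size=dim) + 0.1 * rng.normal(size=samples) for a in A]

def local_gradient(k, x):
    """Gradient of client k's loss f_k(x) = ||A_k x - b_k||^2 / (2 * samples)."""
    return A[k].T @ (A[k] @ x - b[k]) / samples

x = np.zeros(dim)    # global model held by the edge server
lr = 0.05            # learning rate (illustrative value)
mask_size = 20       # number of orthogonal waveforms, s < dim
noise_std = 0.01     # receiver noise after matched filtering

for t in range(200):
    # Masking: only mask_size coordinates can be uploaded this round.
    # A uniformly random mask is one candidate scheme (an assumption here).
    mask = rng.choice(dim, size=mask_size, replace=False)

    # Over-the-air aggregation: clients transmit simultaneously, so the
    # matched-filter bank outputs the sum of the masked gradients plus noise.
    superposed = sum(local_gradient(k, x)[mask] for k in range(num_clients))
    received = superposed + noise_std * rng.normal(size=mask_size)

    # Server update: average the noisy partial gradient, step on the
    # selected coordinates, then broadcast the new model to all clients.
    x[mask] -= lr * received / num_clients

final_loss = sum(np.mean((A[k] @ x - b[k]) ** 2)
                 for k in range(num_clients)) / (2 * num_clients)
print("final global loss:", final_loss)
```

Note that the analog superposition delivers only the sum of the clients' signals, never the individual gradients, which is what makes the bandwidth cost independent of the number of clients; the mask is the design knob the paper's experiments vary.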
Funding Agencies|National Natural Science Foundation of China [62271513]