Academic Paper

FedVQCS: Federated Learning via Vector Quantized Compressed Sensing
Document Type
Periodical
Source
IEEE Transactions on Wireless Communications, 23(3):1755-1770, March 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Wireless communication
Wireless sensor networks
Convergence
Analytical models
Vector quantization
Training
Image reconstruction
Federated learning
distributed learning
quantized compressed sensing
vector quantization
dimensionality reduction
Language
English
ISSN
1536-1276
1558-2248
Abstract
In this paper, a new communication-efficient federated learning (FL) framework is proposed, inspired by vector quantized compressed sensing. The basic strategy of the proposed framework is to compress the local model update at each device by applying dimensionality reduction followed by vector quantization. Subsequently, the global model update is reconstructed at a parameter server by applying a sparse signal recovery algorithm to the aggregation of the compressed local model updates. By harnessing the benefits of both dimensionality reduction and vector quantization, the proposed framework effectively reduces the communication overhead of local update transmissions. Both the design of the vector quantizer and the key parameters for the compression are optimized so as to minimize the reconstruction error of the global model update under the constraint of wireless link capacity. By considering the reconstruction error, the convergence rate of the proposed framework is also analyzed for a non-convex loss function. Simulation results on the MNIST and FEMNIST datasets demonstrate that the proposed framework can improve classification accuracy by more than 2.4% compared to state-of-the-art FL frameworks when the communication overhead of the local model update transmission is 0.1 bit per local model entry.
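The pipeline described in the abstract can be sketched end to end: a local model update is compressed by a random projection (dimensionality reduction) followed by vector quantization against a codebook, and the server recovers a sparse update with a sparse signal recovery algorithm. This is a minimal illustration only: the paper optimizes the quantizer and compression parameters, whereas the Gaussian projection, the fixed grid codebook, and the choice of iterative hard thresholding (IHT) as the recovery algorithm below are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, k, block = 256, 64, 5, 2  # model dim, projected dim, sparsity, VQ block size

# Random Gaussian projection for dimensionality reduction (assumption: the
# paper's optimized projection is not reproduced here).
A = rng.standard_normal((m, d)) / np.sqrt(m)

# Hypothetical 2-D grid codebook: 32 levels per dimension -> 1024 codewords,
# i.e. 10 bits per pair of projected entries. The paper instead optimizes the
# codebook; this fixed grid is only for illustration.
levels = np.linspace(-1.5, 1.5, 32)
codebook = np.stack(np.meshgrid(levels, levels), axis=-1).reshape(-1, 2)

def compress(update):
    """Project the local update to m dims, then vector-quantize it in blocks."""
    y = A @ update
    blocks = y.reshape(-1, block)
    dist = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dist.argmin(axis=1)  # one codeword index per block

def dequantize(indices):
    """Map codeword indices back to an approximation of the projection."""
    return codebook[indices].reshape(-1)

def iht(y, k, iters=200):
    """Iterative hard thresholding: recover a k-sparse x with y ~= A x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # gradient step from spectral norm
    x = np.zeros(d)
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on ||y - A x||^2
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # keep only the k largest entries
    return x

# Demo: a synthetic k-sparse "model update", compressed and reconstructed.
x_true = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)

indices = compress(x_true)                 # what a device would transmit
x_hat = iht(dequantize(indices), k)        # server-side reconstruction
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Note the bit accounting: the device transmits only `m // block` codeword indices (10 bits each here) instead of `d` full-precision entries, which is the kind of overhead reduction the framework targets; the reconstruction error then depends on both the projection ratio `m/d` and the codebook resolution.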