Journal Article

Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks
Document Type
Periodical
Source
IEEE Transactions on Wireless Communications, 23(3):2340-2356, Mar. 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Training
Quantization (signal)
Servers
Mathematical models
Computational modeling
Performance evaluation
Data models
Bitwidth federated learning (FL)
FL training loss optimization
model-based reinforcement learning (RL)
Language
English
ISSN
1536-1276 (Print)
1558-2248 (Electronic)
Abstract
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which, in turn, aggregates them into a quantized global model and synchronizes the devices. The goal is to jointly determine the bitwidths employed for local FL model quantization and the set of devices participating in FL training at each iteration. We pose this as an optimization problem that aims to minimize the training loss of quantized FL under a per-iteration device sampling budget and delay requirement. However, the formulated problem is difficult to solve without (i) a concrete understanding of how quantization impacts global ML performance and (ii) the ability of the server to construct estimates of this process efficiently. To address the first challenge, we analytically characterize how limited wireless resources and induced quantization errors affect the performance of the proposed FL method. Our results quantify how the improvement of FL training loss between two consecutive iterations depends on the device selection and quantization scheme as well as on several parameters inherent to the model being learned. Then, to address the second challenge, we show that the FL training process can be described as a Markov decision process (MDP) and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations. Compared to model-free RL, this model-based RL approach leverages the derived mathematical characterization of the FL training process to discover an effective device selection and quantization scheme without imposing additional device communication overhead. Simulation results show that the proposed FL algorithm can reduce the convergence time by 29% and 63% compared to a model-free RL method and the standard FL method, respectively.
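To make the bitwidth FL round described above concrete, the sketch below shows one iteration in which selected devices train locally, quantize their model parameters to an assigned bitwidth, and the server averages the quantized models. It is a minimal illustration only: the stochastic uniform quantizer, the linear-regression local loss, the equal-weight aggregation, and all function names are assumptions for the example, not the exact scheme or notation used in the paper.

```python
# Minimal sketch of one bitwidth FL round (illustrative assumptions: stochastic
# uniform quantization, linear-regression local training, equal-weight averaging).
import numpy as np

def quantize(w, bitwidth, w_min=-1.0, w_max=1.0):
    """Stochastically quantize a weight vector to 2**bitwidth uniform levels."""
    levels = 2 ** bitwidth - 1
    scaled = (np.clip(w, w_min, w_max) - w_min) / (w_max - w_min) * levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased rounding).
    q = lower + (np.random.rand(*w.shape) < scaled - lower)
    return q / levels * (w_max - w_min) + w_min

def local_update(global_w, x, y, lr=0.1, epochs=1):
    """One device's local training: a few gradient steps on a squared loss."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

def fl_round(global_w, devices, selected, bitwidths):
    """Selected devices train and quantize; the server averages the results."""
    updates = [quantize(local_update(global_w, *devices[i]), bitwidths[i])
               for i in selected]
    return np.mean(updates, axis=0)

# Toy usage: 5 devices, a per-iteration sampling budget of 2 devices.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3])
devices = []
for _ in range(5):
    x = rng.normal(size=(50, 2))
    devices.append((x, x @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
bitwidths = {0: 4, 1: 8, 2: 2, 3: 6, 4: 4}   # per-device quantization bitwidths
for t in range(30):
    selected = rng.choice(5, size=2, replace=False)
    w = fl_round(w, devices, selected, bitwidths)
print("learned weights:", w)
```

In this toy run the random device sampling and fixed bitwidths stand in for the joint selection that the paper optimizes; the point is only to show where the bitwidth and the sampling budget enter the training loop.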
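The second ingredient of the abstract is the server-side decision: treating each iteration as a step of an MDP and choosing the device subset and bitwidths with a model of the resulting loss decrease rather than by trial and error. The sketch below scores candidate actions with a hypothetical surrogate for the per-iteration loss improvement and discards actions violating a delay limit. The surrogate (quantization error shrinking as 4^-b, weighted by data size), the delay model, and the exhaustive search are all illustrative assumptions; the paper instead uses its derived analytical characterization inside a model-based RL policy.

```python
# Hypothetical sketch of model-based action selection for bitwidth FL:
# score (device subset, bitwidth) actions with an assumed loss-decrease
# surrogate, subject to a per-iteration delay requirement.
import itertools
import numpy as np

def predicted_loss_decrease(subset, bitwidths, data_sizes, grad_norm=1.0):
    """Illustrative surrogate: more data helps, coarser quantization hurts."""
    gain = sum(data_sizes[i] * grad_norm for i in subset)
    quant_penalty = sum(data_sizes[i] * 4.0 ** (-bitwidths[i]) for i in subset)
    return gain - quant_penalty

def transmission_delay(subset, bitwidths, model_dim, rates):
    """Delay of the slowest selected device (bits sent / uplink rate)."""
    return max(model_dim * bitwidths[i] / rates[i] for i in subset)

def select_action(num_devices, budget, delay_limit, data_sizes, rates,
                  model_dim=10_000, candidate_bits=(2, 4, 8)):
    """Exhaustive search over small actions; a learned model-based policy
    would replace this enumeration in practice."""
    best, best_score = None, -np.inf
    for subset in itertools.combinations(range(num_devices), budget):
        for bits in itertools.product(candidate_bits, repeat=budget):
            bw = dict(zip(subset, bits))
            if transmission_delay(subset, bw, model_dim, rates) > delay_limit:
                continue  # violates the per-iteration delay requirement
            score = predicted_loss_decrease(subset, bw, data_sizes)
            if score > best_score:
                best, best_score = (subset, bw), score
    return best

# Toy usage: 5 devices, budget of 2, heterogeneous (hypothetical) uplink rates.
data_sizes = [100, 200, 150, 50, 300]
rates = [1e6, 5e5, 2e6, 1e6, 8e5]          # bits per second
action = select_action(5, budget=2, delay_limit=0.2,
                       data_sizes=data_sizes, rates=rates)
print("selected devices and bitwidths:", action)
```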