Academic Article

Automatic Layer Freezing for Communication Efficiency in Cross-Device Federated Learning
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(4):6072-6083, Feb. 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Training
Servers
Optimization
Costs
Synchronization
Internet of Things
Federated learning
Communication
convolutional neural networks (ConvNets)
federated learning (FL)
optimization
Language
English
ISSN
2327-4662
2372-2541
Abstract
Federated learning (FL) is a collaborative machine learning paradigm in which network-edge clients train a global model under the orchestration of a central server. Unlike traditional distributed learning, each participating client keeps its data locally, ensuring privacy protection by default. However, state-of-the-art FL implementations suffer from massive information exchange between clients and the server. This issue prevents their adoption in constrained environments, typical of the Internet of Things domain, where communication bandwidth and energy budgets are severely limited. To achieve higher efficiency at scale, the future of FL calls for additional optimizations that retain high-quality learning capability under lower communication pressure. To address this challenge, we propose automatic layer freezing (ALF), an embedded mechanism that gradually drops a growing portion of the model out of the training and synchronization phases of the learning loop, reducing the volume of data exchanged with the central server. ALF monitors the evolution of model updates and identifies layers that have reached a stable representation, where further weight updates would have minimal impact on accuracy. By freezing these layers, ALF achieves substantial savings in communication bandwidth and energy consumption. The proposed implementation of the ALF mechanism is compatible with any FL strategy, requires minimal integration effort, and does not interfere with existing optimizations. Extensive experiments conducted with a representative set of FL strategies on two image classification tasks show that ALF improves the communication efficiency of the baseline FL implementations, ensuring up to 83.91% of data volume savings with no or only marginal accuracy loss.
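To make the freezing idea concrete, below is a minimal, hypothetical sketch of a server-side layer-freezing heuristic of the kind the abstract describes. It is not the authors' actual ALF algorithm: the stability criterion (per-layer relative update norm staying below a threshold for a few consecutive rounds) and all names (LayerFreezer, threshold, patience) are assumptions, since the abstract does not specify the metric ALF uses.

# Hypothetical sketch of round-to-round layer-freezing logic (not the
# paper's actual ALF algorithm). Assumes per-layer weights are numpy
# arrays keyed by layer name; the stability test below is an assumed
# stand-in for ALF's unspecified update-monitoring criterion.
import numpy as np

class LayerFreezer:
    def __init__(self, threshold=1e-3, patience=3):
        self.threshold = threshold    # relative-update cutoff (assumed value)
        self.patience = patience      # consecutive stable rounds required
        self.stable_rounds = {}       # layer name -> count of stable rounds
        self.frozen = set()           # layers dropped from training and sync

    def update(self, old_weights, new_weights):
        """Compare consecutive global models and freeze stabilized layers.

        old_weights / new_weights: dict mapping layer name -> np.ndarray.
        Returns the set of frozen layer names, which can then be skipped
        in both local training and client-server synchronization.
        """
        for name, w_new in new_weights.items():
            if name in self.frozen:
                continue
            w_old = old_weights[name]
            # Relative magnitude of this round's update for the layer.
            rel_update = np.linalg.norm(w_new - w_old) / (
                np.linalg.norm(w_old) + 1e-12)
            if rel_update < self.threshold:
                self.stable_rounds[name] = self.stable_rounds.get(name, 0) + 1
                if self.stable_rounds[name] >= self.patience:
                    # Layer has barely changed for `patience` rounds:
                    # stop training and synchronizing it.
                    self.frozen.add(name)
            else:
                # A significant update resets the stability counter.
                self.stable_rounds[name] = 0
        return self.frozen

In this sketch, the server would call update() after each aggregation round and exclude the returned layers from the next round's model broadcast and client uploads; skipping those tensors in both directions is what would produce the communication savings the abstract reports.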