Scholarly Article

Communication-Efficient Personalized Federated Meta-Learning in Edge Networks
Document Type
Periodical
Source
IEEE Transactions on Network and Service Management, vol. 20, no. 2, pp. 1558-1571, Jun. 2023
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Data models
Adaptation models
Training
Computational modeling
Biological system modeling
Federated learning
Task analysis
Edge networks
federated meta learning
representation learning
autoencoder
differential privacy
Language
English
ISSN
1932-4537
2373-7379
Abstract
Due to the privacy-breach risks and data-aggregation burden of traditional centralized machine learning (ML), applications, data, and computing power are being pushed from centralized data centers to network edge nodes. Federated Learning (FL) is an emerging privacy-preserving distributed ML paradigm suited to edge network applications that addresses both of these issues. However, current FL methods cannot flexibly handle the challenges of model personalization and communication overhead in such applications. Inspired by the mixture of global and local models, we propose a Communication-Efficient Personalized Federated Meta-Learning algorithm that obtains a novel personalized model by introducing a personalization parameter; adjusting the size of this parameter improves model accuracy and accelerates convergence. Furthermore, the local model to be uploaded is transformed into a latent space by an autoencoder, which reduces the amount of transmitted data and therefore the communication overhead. Local and task-global differential privacy are applied to protect model generation. Simulation experiments demonstrate that, compared with several other algorithms, our method obtains better personalized models at lower communication overhead for edge network applications.
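The abstract describes two mechanisms at a high level: mixing global and local models through a personalization parameter, and compressing the uploaded local model into a latent space with an autoencoder. The following is a minimal illustrative sketch of those two ideas only, not the paper's implementation; all names (Autoencoder, alpha, latent_dim, the flattened-parameter interface) are hypothetical assumptions, and the differential-privacy step is omitted.

```python
# Hypothetical sketch: a client mixes global and local (flattened) model
# parameters via a personalization weight alpha, then encodes the result
# into a low-dimensional latent code before "uploading" it. The server
# decodes the latent code back to the full parameter vector.
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    """Small autoencoder over flattened model parameters (illustrative)."""

    def __init__(self, model_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(model_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, model_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def personalize(global_params: torch.Tensor, local_params: torch.Tensor,
                alpha: float) -> torch.Tensor:
    """Mix global and local parameters; alpha controls the degree of personalization."""
    return alpha * local_params + (1.0 - alpha) * global_params


def compress_update(ae: Autoencoder, flat_update: torch.Tensor) -> torch.Tensor:
    """Client side: encode the flattened update into the latent space before upload."""
    with torch.no_grad():
        return ae.encoder(flat_update)


def decompress_update(ae: Autoencoder, latent: torch.Tensor) -> torch.Tensor:
    """Server side: reconstruct the update from its latent code."""
    with torch.no_grad():
        return ae.decoder(latent)


if __name__ == "__main__":
    model_dim, latent_dim = 10_000, 512            # latent_dim << model_dim -> less traffic
    ae = Autoencoder(model_dim, latent_dim)

    global_params = torch.randn(model_dim)         # stand-ins for real model weights
    local_params = torch.randn(model_dim)

    personalized = personalize(global_params, local_params, alpha=0.7)
    latent = compress_update(ae, personalized)     # only `latent` would be transmitted
    reconstructed = decompress_update(ae, latent)

    print(latent.numel(), "values uploaded instead of", personalized.numel())
```

Under this sketch's assumptions, the communication saving per round comes from transmitting latent_dim values instead of model_dim; how the autoencoder itself is trained and shared, and how noise is added for local and task-global differential privacy, are details specified in the paper rather than here.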