Academic paper

Communication-Efficient Federated Learning for UAV Networks with Knowledge Distillation and Transfer Learning
Document Type
Conference
Source
GLOBECOM 2023 - 2023 IEEE Global Communications Conference, pp. 5739-5744, Dec. 2023
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Engineering Profession
General Topics for Engineers
Power, Energy and Industry Applications
Signal Processing and Analysis
Adaptation models
Costs
Federated learning
Bit rate
Transfer learning
Switches
Autonomous aerial vehicles
UAV Networks
Federated Learning
Model Switching
Knowledge Distillation
Transfer Learning
Language
English
ISSN
2576-6813
Abstract
Federated learning (FL) in unmanned aerial vehicle (UAV) networks demands considerable communication resources to transfer model data between the central server and the UAVs (FL clients). However, different UAVs may have different communication capabilities due to UAV maneuverability and heterogeneity, and limited communication resources can become a bottleneck for FL performance. In this paper, we first introduce a knowledge distillation based approach that equips each FL client with two models of different sizes, namely a teacher model and a student model, to trade off FL performance against communication cost. We then propose a novel model switching method that switches between the teacher model and the student model to adapt to the dynamic nature of UAV networks. Specifically, considering the available communication bitrate and learning accuracy, we design a threshold-based model switching algorithm (TBMSA) and determine the threshold based on the k-means method (DTBKM) to determine the switching point accurately and quickly. In addition, for knowledge transfer between the models, we design a knowledge inheritance based on transfer learning (KIBTL) algorithm, which transfers knowledge from one model to the other. Experiments show that the proposed model switching algorithm achieves significant performance improvements over existing baselines.
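The abstract names a threshold-based switch between teacher and student models, with the threshold set via k-means. The paper's actual TBMSA/DTBKM formulations are not given here, so the following is only a minimal sketch of the general idea under assumed rules: cluster observed link bitrates into two groups with 1-D k-means (k=2) and place the switching threshold at the midpoint of the two cluster centers; clients whose bitrate exceeds the threshold exchange the larger teacher model, others the smaller student model. The function names and the midpoint rule are illustrative assumptions, not the authors' algorithm.

```python
def kmeans_1d(values, iters=50):
    """Two-cluster 1-D k-means; returns the (low, high) centroids.

    Illustrative stand-in for the paper's k-means step (DTBKM);
    the actual clustering inputs and rule are not in the abstract.
    """
    lo, hi = min(values), max(values)
    for _ in range(iters):
        # Assign each value to the nearer of the two current centroids.
        low_c = [v for v in values if abs(v - lo) <= abs(v - hi)]
        high_c = [v for v in values if abs(v - lo) > abs(v - hi)]
        new_lo = sum(low_c) / len(low_c) if low_c else lo
        new_hi = sum(high_c) / len(high_c) if high_c else hi
        if new_lo == lo and new_hi == hi:  # converged
            break
        lo, hi = new_lo, new_hi
    return lo, hi

def switching_threshold(bitrates):
    """Assumed rule: midpoint between the two bitrate cluster centers."""
    lo, hi = kmeans_1d(bitrates)
    return (lo + hi) / 2

def select_model(bitrate, threshold):
    """Pick the large (teacher) model only when the link can afford it."""
    return "teacher" if bitrate >= threshold else "student"

# Example: observed UAV link bitrates in Mbit/s (made-up numbers).
rates = [2.1, 2.5, 1.8, 9.0, 10.5, 8.7]
t = switching_threshold(rates)
```

With these sample bitrates the threshold lands between the slow (~2 Mbit/s) and fast (~9 Mbit/s) clusters, so a 9 Mbit/s client would upload the teacher model and a 2 Mbit/s client the student model.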