Academic Paper

On-Board Federated Learning in Orbital Edge Computing
Document Type
Conference
Source
2023 IEEE 29th International Conference on Parallel and Distributed Systems (ICPADS), pp. 1045-1052, Dec. 2023
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Training
Satellite constellations
Power demand
Satellites
Federated learning
Computational modeling
Low earth orbit satellites
Low-Earth Orbit
Federated Learning
Orbital Edge Computing
Energy Consumption
Communication Cost
Language
English
ISSN
2690-5965
Abstract
Low Earth Orbit (LEO) satellite constellations are used for a wide range of applications including earth observation, communication services, navigation, and positioning. They have emerged as a new source of data, but transferring this data to a ground station (GS) for analysis and machine learning requires extensive bandwidth and incurs high latency. Limited battery capacity and constrained communication and computing capabilities further affect the training process. Federated Learning (FL) has been used to address these challenges, although it relies heavily on the GS for model aggregation. In this paper, we consider Orbital Edge Computing (OEC) as an architecture for LEO satellite constellations and propose an on-board Federated Learning approach to reduce communication with the GS. We present a novel decentralised FL algorithm, called FedOrbit, which uses reinforcement learning for cluster formation and exploits satellite visiting patterns to utilise intra- and inter-satellite communications for model aggregation. Extensive performance evaluation under Walker Delta-based LEO constellation configurations and different datasets including MNIST, CIFAR-10, and EuroSat revealed that FedOrbit can significantly reduce communication rounds, power consumption, and training time in comparison to state-of-the-art FL approaches while maintaining high accuracy. FedOrbit demonstrates a significant decrease in power consumption, specifically by 8.8% and 79.1% on the MNIST dataset, compared to decentralised and centralised FL approaches, respectively. The proposed technique also reduces training time by 5× and 48× compared with the decentralised and centralised FL approaches, respectively.
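The record does not include FedOrbit's aggregation details. For reference, a minimal sketch of the standard sample-count-weighted federated averaging (FedAvg) step that an aggregating satellite could perform on-board is shown below; the function name and example values are illustrative, not taken from the paper:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client model parameters by a weighted average.

    client_weights: list of per-client parameter lists (one ndarray per layer)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Weight each client's layer by its share of the total training data
        layer_avg = sum(
            w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Example: two satellites, a one-layer model
w_a = [np.array([1.0, 2.0])]   # trained on 100 samples
w_b = [np.array([3.0, 4.0])]   # trained on 300 samples
agg = fedavg([w_a, w_b], client_sizes=[100, 300])
# 0.25 * w_a + 0.75 * w_b -> [2.5, 3.5]
```

In a decentralised setting such as the one the abstract describes, this aggregation would run on a cluster-master satellite over models received via inter-satellite links, rather than at the ground station.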