Academic paper

Research on Offloading Strategy of Twin UAVs Edge Computing Tasks for Emergency Communication
Document Type
Periodical
Source
IEEE Transactions on Network and Service Management, 21(1):684-696, Feb. 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Federated learning
Digital twins
Edge computing
Task analysis
Real-time systems
Deep learning
Costs
AWMEN
TD3-BC-R
AC-R
TD3-R
DDPG-R
Language
ISSN
1932-4537
2373-7379
Abstract
Aiming to solve the problem of interrupted communication service caused by damage to ground communication facilities after a disaster, an Air-Ground Integrated Mobile Edge Network (AWMEN) offloading model is established under constraints of communication security, energy consumption, and coverage. Traditional methods must be re-iterated whenever the preset environmental state changes, which wastes large amounts of communication and computing resources, greatly reduces efficiency, and risks data privacy disclosure. In contrast, a deep reinforcement learning method under the federated learning framework is more flexible and better suited to dynamic scenarios. A Markov decision process model is constructed from the unmanned aerial vehicles (UAVs) and their environment; experience trajectories are generated by interacting with the external environment, and the optimal offloading strategy is obtained. The proposed Twin Delayed Deep Deterministic Policy Gradient with behavior cloning (TD3-BC-R) is compared with a baseline method (0-1 mode), Actor-Critic (AC-R), Deep Deterministic Policy Gradient (DDPG-R), and Twin Delayed Deep Deterministic Policy Gradient (TD3-R). Experiments show that TD3-BC-R reduces the total time cost by more than one third while also achieving low-latency transmission.
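The abstract names TD3 with behavior cloning (TD3-BC) as the offloading policy learner. As a rough illustration of that idea (not the paper's implementation), the sketch below computes the TD3+BC-style actor loss: the usual deterministic-policy term −λ·Q(s, π(s)) plus a behavior-cloning penalty pulling π(s) toward actions stored in the replay buffer, with λ set adaptively from the critic's magnitude. The linear policy/critic, dimensions, and the `alpha` value are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy replay-buffer batch of (state, action) pairs (assumed shapes).
states = rng.normal(size=(64, 4))            # 4-dim environment state
actions = rng.uniform(-1, 1, size=(64, 2))   # 2-dim offloading action

# Hypothetical linear policy and critic as stand-ins for neural networks.
W_pi = rng.normal(scale=0.1, size=(4, 2))
w_q = rng.normal(scale=0.1, size=(4 + 2,))

def policy(s):
    # Deterministic policy with bounded output, as in TD3.
    return np.tanh(s @ W_pi)

def critic(s, a):
    # Q(s, a) from the concatenated state-action vector.
    return np.concatenate([s, a], axis=1) @ w_q

def td3_bc_actor_loss(s, a_buffer, alpha=2.5):
    a_pi = policy(s)
    q = critic(s, a_pi)
    lam = alpha / (np.abs(q).mean() + 1e-8)    # adaptive Q-weighting
    bc = ((a_pi - a_buffer) ** 2).mean()       # behavior-cloning regularizer
    return -(lam * q.mean()) + bc

loss = td3_bc_actor_loss(states, actions)
```

Minimizing this loss (by gradient descent on the policy parameters) trades off maximizing the critic's value against staying close to previously collected offloading decisions, which is what stabilizes TD3-BC relative to plain TD3.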