Journal Article

Task Offloading and Resource Allocation for Fog Computing in NG Wireless Networks: A Federated Deep Reinforcement Learning Approach
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(4):6802-6816, Feb. 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Task analysis
Resource management
Energy consumption
Cloud computing
Internet of Things
Delays
Training
Federated learning
federated hierarchical deep deterministic policy gradient (FHDDPG)
next generation (NG) network
task offloading (TO)
transmission rate proportion allocation
Language
English
ISSN
2327-4662
2372-2541
Abstract
Task offloading (TO) is beneficial for reducing the delay and energy consumption of applications in next generation (NG) wireless networks. However, existing TO approaches are unable to achieve both low complexity and stable performance. To this end, a novel federated hierarchical deep deterministic policy gradient (FHDDPG) algorithm for TO and resource allocation (RA) is proposed in this article. Specifically, three deep deterministic policy gradient (DDPG) modules are deployed in parallel to make offloading decisions on the execution mode of tasks and the proportion allocation of the transmission rate. Subsequently, a federated learning method is proposed to collaboratively train the HDDPG model by sharing the models’ weights. Meanwhile, the delay and the energy consumption are jointly considered as the average system consumption, which is defined as the reward metric of FHDDPG. Finally, extensive simulations are conducted to demonstrate the effectiveness of our proposal. The experimental results indicate that the average system consumption of FHDDPG is reduced by 11.4% and 18% compared with HDDPG and DDPG, respectively, demonstrating that FHDDPG achieves better performance.
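The abstract describes two mechanisms: a reward built from the combined delay and energy consumption, and federated training in which parallel DDPG modules share model weights rather than raw data. The following is a minimal sketch of those two ideas, not the authors' implementation; the weight values, layer names, and the 0.5/0.5 trade-off coefficients are illustrative assumptions, and the aggregation shown is a plain FedAvg-style element-wise mean.

```python
# Minimal sketch (not the paper's code): FedAvg-style aggregation of DDPG
# module weights plus a delay/energy "system consumption" reward.
import numpy as np


def system_consumption(delay, energy, w_delay=0.5, w_energy=0.5):
    """Weighted sum of delay and energy, negated later to form the reward.
    The weights are illustrative assumptions, not values from the paper."""
    return w_delay * delay + w_energy * energy


def federated_average(local_weights):
    """Element-wise mean of each parameter tensor across the participating
    DDPG modules (weight sharing instead of raw data sharing)."""
    keys = local_weights[0].keys()
    return {k: np.mean([w[k] for w in local_weights], axis=0) for k in keys}


# Example: three parallel DDPG modules (hypothetical two-layer actors)
# contribute their local weights each aggregation round.
rng = np.random.default_rng(0)
local_actors = [
    {"layer1": rng.normal(size=(4, 8)), "layer2": rng.normal(size=(8, 2))}
    for _ in range(3)
]
global_actor = federated_average(local_actors)

# Each module would reload the aggregated weights before its next round of
# local training; the reward steers policies toward low delay and energy.
reward = -system_consumption(delay=0.12, energy=0.8)
print(global_actor["layer1"].shape, round(reward, 3))
```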