Journal Article

Adaptive Resource Allocation for Mobile Edge Computing in Internet of Vehicles: A Deep Reinforcement Learning Approach
Document Type
Periodical
Source
IEEE Transactions on Vehicular Technology, 73(4):5834-5848, Apr. 2024
Subject
Transportation
Aerospace
Resource management
Task analysis
Optimization
Delays
Adaptive systems
Vehicle dynamics
Uplink
Mobile edge computing (MEC)
Internet of Vehicles (IoV)
adaptive joint resource allocation
deep reinforcement learning (DRL)
Language
English
ISSN
0018-9545 (Print)
1939-9359 (Electronic)
Abstract
Mobile edge computing (MEC) has emerged in recent years as an effective solution to the challenge of limited vehicle resources in the Internet of Vehicles (IoV), especially for computation-intensive vehicle tasks. This paper investigates a multi-user MEC system with an active task model in highly dynamic IoV scenarios. To improve MEC performance in terms of system capacity, task service delay, and energy consumption, we design an adaptive joint resource allocation scheme based on deep reinforcement learning (DRL), which includes uplink, computing, and downlink resource allocation. Further, a multi-actor parallel twin delayed deep deterministic policy gradient (MAPTD3) algorithm is devised to jointly and adaptively optimize these strategies during each time slot. Finally, numerical results demonstrate that the proposed adaptive joint resource allocation scheme improves system performance significantly while satisfying task delay and system resource constraints. In addition, the space complexity of the designed optimization algorithm is lower than that of conventional DRL algorithms.
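The abstract does not give the MAPTD3 details, but the multi-actor idea it names can be illustrated with a minimal, purely hypothetical sketch: separate actor policies propose uplink, computing, and downlink allocations in parallel, while twin critics score the joint action and the smaller of the two value estimates is kept, the standard TD3 trick for curbing overestimation. All names, weights, and the toy state below are assumptions for illustration, not the paper's method.

```python
import random

def make_actor(dim):
    """A stand-in 'policy': maps a state to a normalized allocation vector.
    (Hypothetical; the paper would use a trained neural network here.)"""
    weights = [random.uniform(0.1, 1.0) for _ in range(dim)]
    def actor(state):
        raw = [w * s for w, s in zip(weights, state)]
        total = sum(raw) or 1.0
        return [r / total for r in raw]  # fractions of that resource budget
    return actor

def twin_critic_value(state, joint_action):
    """Twin critics: return the minimum of two value estimates (TD3-style)."""
    q1 = sum(s * a for s, a in zip(state, joint_action))
    q2 = 0.9 * q1 + 0.05  # a second, deliberately different toy estimator
    return min(q1, q2)

random.seed(0)
state = [0.4, 0.7, 0.2]  # toy per-user demand features (assumed)
# One actor per resource type, acting in parallel on the same state:
actors = {name: make_actor(len(state)) for name in ("uplink", "compute", "downlink")}
joint = {name: actor(state) for name, actor in actors.items()}
value = twin_critic_value(state * 3,
                          joint["uplink"] + joint["compute"] + joint["downlink"])
for alloc in joint.values():
    assert abs(sum(alloc) - 1.0) < 1e-9  # each allocation spends its full budget
```

The sketch only shows the structural split into per-resource actors and a min-of-two-critics evaluation; the actual MAPTD3 training loop, delayed policy updates, and target smoothing are described in the paper itself.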