Journal Article

A Hybrid Deep Reinforcement Learning Approach for Jointly Optimizing Offloading and Resource Management in Vehicular Networks
Document Type
Periodical
Source
IEEE Transactions on Vehicular Technology, 73(2):2456-2467, Feb. 2024
Subject
Transportation
Aerospace
Task analysis
Computational modeling
Servers
Resource management
Optimization
Bandwidth
Delays
Multiple-access edge computing
caching
software-defined networking
deep reinforcement learning
Language
English
ISSN
0018-9545 (print)
1939-9359 (electronic)
Abstract
Satisfying the quality-of-service requirements of data-intensive autonomous driving applications has become challenging. In this work, we propose a novel methodology that optimizes communication, computation, and caching configurations in a vehicular multi-access edge computing (MEC) system to minimize the average latency of tasks from the vehicles and maximize the number of tasks finished within their latency requirements. The communication model characterizes bandwidth and power allocation for uplink and downlink transmission in the vehicular MEC system. The caching model includes per-edge-server variables that determine the trade-off between flexibility and hit rate. Finally, the computation model characterizes computation resource allocation. Our method for solving the optimization problem consists of two main steps. First, a deep Q-learning algorithm determines the optimal assignment of tasks to the edge servers. Then, a greedy approach is applied to the communication, computation, and caching subproblems to decide the bandwidth and power allocation, CPU allocation, and caching strategy, respectively. Simulation results show that our algorithm outperforms several baselines in minimizing latency and maximizing the number of tasks finished within latency requirements, and verify the benefit of including the different resource allocation variables in our optimization.
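
The abstract outlines a two-step pipeline: a Q-learning agent assigns each task to an edge server, and a greedy rule then divides that server's bandwidth and CPU among its tasks. The Python sketch below is only an illustration of that structure under assumptions not taken from the paper: it substitutes a tabular Q-learning stand-in for the paper's deep Q-network, uses a proportional-to-demand rule for the greedy subproblem, and all names, state encodings, and parameter values (N_SERVERS, EPSILON, the reward) are hypothetical.

# Illustrative sketch of the two-step structure described in the abstract.
# All names and parameters are hypothetical; Step 1 uses a tabular
# Q-learning stand-in for the paper's deep Q-network, and Step 2 greedily
# splits bandwidth and CPU among tasks in proportion to their demands.
import random
from collections import defaultdict

N_SERVERS = 3            # hypothetical number of edge servers
EPSILON = 0.1            # exploration rate
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount factor

Q = defaultdict(float)   # Q[(state, action)] -> estimated value

def choose_server(state):
    """Epsilon-greedy assignment of a task (state) to an edge server."""
    if random.random() < EPSILON:
        return random.randrange(N_SERVERS)
    return max(range(N_SERVERS), key=lambda a: Q[(state, a)])

def update_q(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in range(N_SERVERS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def greedy_allocation(tasks, bandwidth_hz, cpu_cycles):
    """Split one server's bandwidth and CPU among its assigned tasks in
    proportion to their demands (a simple greedy rule standing in for the
    paper's communication/computation/caching subproblem solvers)."""
    total_bits = sum(t["bits"] for t in tasks) or 1
    total_cyc = sum(t["cycles"] for t in tasks) or 1
    return [
        {
            "task": t["id"],
            "bandwidth_hz": bandwidth_hz * t["bits"] / total_bits,
            "cpu_cycles": cpu_cycles * t["cycles"] / total_cyc,
        }
        for t in tasks
    ]

if __name__ == "__main__":
    # Toy episode: assign three tasks, then allocate one server's resources.
    tasks = [
        {"id": 0, "bits": 2e6, "cycles": 1e9},
        {"id": 1, "bits": 1e6, "cycles": 4e9},
        {"id": 2, "bits": 3e6, "cycles": 2e9},
    ]
    for t in tasks:
        s = t["id"]                  # hypothetical state encoding
        a = choose_server(s)
        reward = -t["cycles"] / 1e9  # hypothetical latency-based penalty
        update_q(s, a, reward, next_state=(s + 1) % len(tasks))
    print(greedy_allocation(tasks, bandwidth_hz=20e6, cpu_cycles=10e9))

In the paper's setting, the state would instead encode task and network features, the DQN would generalize across them, and the greedy step would also set uplink/downlink power and the per-server caching strategy; this sketch only conveys the assign-then-allocate decomposition.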