Journal Article

Latency-Aware Radio Resource Optimization in Learning-Based Cloud-Aided Small Cell Wireless Networks
Document Type
Periodical
Source
IEEE Transactions on Green Communications and Networking, 8(1):542-558, Mar. 2024
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
General Topics for Engineers
Heuristic algorithms
Resource management
Dynamic scheduling
Optimization
Wireless networks
Quality of service
Behavioral sciences
Caching
cloud-aided wireless networks
latency
stochastic optimization
dynamic matching resource allocation
Language
English
ISSN
2473-2400
Abstract
Low-latency communication is one of the fundamental requirements for 5G wireless networks and beyond. In this paper, a novel approach for joint caching, user scheduling, and resource allocation is proposed to minimize the queuing latency in serving users’ requests in cloud-aided wireless networks. Because user requests vary slowly over time, a time-scale separation technique is used to decouple the joint caching problem from the user scheduling and radio resource allocation problems. To serve the spatio-temporal user requests under storage limitations, a Reinforcement Learning (RL) approach is used to optimize the caching strategy at the small cell base stations by minimizing the content fetching cost. A spectral clustering algorithm is proposed to speed up the convergence of the RL algorithm for large content caching problems by clustering contents based on user requests. Meanwhile, a dynamic mechanism is proposed to locally group coupled base stations based on user requests so that they can collaboratively optimize their caching strategies. To further reduce the latency of fetching and serving user requests, a dynamic matching algorithm is proposed to schedule users and allocate radio resources to them based on user requests and queue lengths under probabilistic latency constraints. Simulation results show that the proposed approach reduces the average delay by 21% to 90% compared to baselines using random caching, random resource allocation, and random scheduling.
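
The abstract summarizes the method at a high level; below is a minimal, hypothetical Python sketch of the learning-based caching idea it describes, not the authors' algorithm. The catalog size, cache size, fetching-cost model, and the tabular value-learning rule are all assumptions made for illustration; the paper additionally clusters contents via spectral clustering and groups coupled base stations, which this sketch omits.

```python
# Hypothetical sketch (not the paper's implementation): a tabular agent
# learns the value of keeping each content cached at a small cell base
# station, rewarded by the fetching cost a cache hit avoids. All constants
# below are illustrative assumptions.
import random
from collections import defaultdict

N_CONTENTS = 20        # assumed content catalog size
CACHE_SIZE = 5         # assumed per-base-station storage limit
FETCH_COST = 1.0       # assumed cost of fetching a missed content from the cloud
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

value = defaultdict(float)   # learned value of keeping each content cached

def serve(cache, request):
    """Serve one request, update content values, and evict on a miss."""
    if request in cache:
        # Hit: reinforce the cached content (fetching cost avoided).
        value[request] += ALPHA * (FETCH_COST + GAMMA * value[request] - value[request])
        return True
    # Miss: pay the fetching cost, then cache the request by evicting the
    # least valuable content (with epsilon-greedy exploration).
    if random.random() < EPS:
        victim = random.choice(sorted(cache))
    else:
        victim = min(cache, key=value.__getitem__)
    value[victim] += ALPHA * (-FETCH_COST + GAMMA * value[victim] - value[victim])
    cache.remove(victim)
    cache.add(request)
    return False

cache = set(range(CACHE_SIZE))
# Zipf-like request stream: taking the min of two uniform draws favors
# low-index contents, mimicking skewed user demand.
requests = (min(random.randrange(N_CONTENTS), random.randrange(N_CONTENTS))
            for _ in range(10_000))
hits = sum(serve(cache, r) for r in requests)
print(f"cache hit rate: {hits / 10_000:.2f}")
```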