Academic Paper

End-to-End Deep Learning Proactive Content Caching Framework
Document Type
Conference
Source
GLOBECOM 2022 - 2022 IEEE Global Communications Conference, pp. 1043-1048, Dec. 2022
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Engineering Profession
General Topics for Engineers
Power, Energy and Industry Applications
Signal Processing and Analysis
Deep learning
Wireless communication
Performance evaluation
Privacy
Preforms
Performance gain
Probability distribution
Proactive Content Caching
Deep Learning
Language
English
Abstract
Proactive content caching has been proposed as a promising solution to cope with the challenges caused by the rapid surge in content access from wireless and mobile devices and to prevent significant revenue loss for content providers. In this paper, we propose an end-to-end Deep Learning framework for proactive content caching that models the dynamic interaction between users and content items, in particular their features. The proposed model performs the caching task by building a probability distribution over the content items, per user, via a Deep Neural Network, and it supports both centralized and distributed caching schemes. In addition, the paper addresses a key question: do we need an explicit recommendation system based on user-item pairs for content caching, i.e., do we need to develop a recommendation system while tackling the content caching problem? To this end, an end-to-end Deep Learning framework is introduced. Finally, we validate our approach through extensive experiments on a real-world public dataset, MovieLens. Our experiments show consistent performance gains: the proposed Deep Learning Caching module, dubbed DLC, significantly outperforms the state-of-the-art content caching schemes used as baselines. Our code is available at: https://github.com/heshameraqi/Proactive-Content-Caching-with-Deep-Learning.
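The abstract describes a deep neural network that outputs a per-user probability distribution over the content catalogue, which then drives the cache decision. The sketch below illustrates that idea only; it is not the authors' implementation (see the linked repository), and the class name DLCSketch, the embedding-plus-MLP architecture, and all sizes are hypothetical assumptions.

```python
# Minimal sketch (assumed PyTorch): map a user to a probability distribution
# over content items, then cache the top-C items. Not the paper's actual model.
import torch
import torch.nn as nn

class DLCSketch(nn.Module):
    def __init__(self, num_users, num_items, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, emb_dim)   # learned user features
        self.mlp = nn.Sequential(                          # user features -> item scores
            nn.Linear(emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_items),
        )

    def forward(self, user_ids):
        # Per-user probability distribution over the content catalogue.
        return torch.softmax(self.mlp(self.user_emb(user_ids)), dim=-1)

# Usage: score the catalogue for a batch of users and cache the top-C items.
model = DLCSketch(num_users=1000, num_items=5000)
probs = model(torch.tensor([0, 1, 2]))                 # shape: (3, 5000)
cache = torch.topk(probs.mean(dim=0), k=100).indices   # item IDs to cache
```

In a distributed variant, the averaging step could instead be restricted to the users served by each edge cache, so every cache node ranks items against its own local user population.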