Academic Paper

Partial Computation Offloading in NOMA-Assisted Mobile-Edge Computing Systems Using Deep Reinforcement Learning
Document Type
Periodical
Source
IEEE Internet of Things Journal, 8(17):13196-13208, Sep. 2021
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
NOMA
Resource management
Task analysis
Energy consumption
Delays
Servers
Computational modeling
Deep reinforcement learning (DRL)
mobile-edge computing (MEC)
nonorthogonal multiple access (NOMA)
partial computation offloading
resource allocation
Language
English
ISSN
2327-4662
2372-2541
Abstract
Mobile-edge computing (MEC) and nonorthogonal multiple access (NOMA) have been regarded as promising technologies for beyond-fifth-generation (B5G) and sixth-generation (6G) networks. This study aims to reduce the computational overhead (a weighted sum of consumed energy and latency) in a NOMA-assisted MEC network by jointly optimizing the computation offloading policy and channel resource allocation under dynamic network environments with time-varying channels. To this end, we propose a deep reinforcement learning algorithm named ACDQN that combines the advantages of actor–critic and deep Q-network methods while maintaining low complexity. The proposed algorithm considers partial computation offloading, in which users can split computation tasks so that part is executed on the local terminal while the rest is offloaded to the MEC server. It also considers a hybrid multiple access scheme that combines the advantages of NOMA and orthogonal multiple access to serve diverse user requirements. Extensive simulations show that the proposed algorithm converges stably to its optimal value, achieves near-optimal performance, and reduces computational overhead by approximately 10%, 27%, and 69% compared with full offloading with NOMA, random offloading with NOMA, and fully local execution, respectively.
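To make the abstract's objective concrete, the sketch below illustrates how a weighted-sum "computational overhead" of energy and delay might be evaluated for a given offloading fraction. This is not the paper's model: every parameter value, the parallel local/remote execution assumption, and the standard energy models (kappa*cycles*f^2 for local CPU energy, transmit power times upload time for offloading) are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's formulation): evaluating a
# weighted sum of energy and latency under partial computation offloading.
# All constants below are hypothetical assumptions.

def overhead(alpha, task_bits, f_local, f_mec, rate,
             kappa=1e-27, p_tx=0.1, cycles_per_bit=1000,
             w_energy=0.5, w_time=0.5):
    """Weighted-sum overhead for offloading a fraction `alpha` (0..1)
    of a task of `task_bits` bits to an MEC server."""
    local_bits = (1 - alpha) * task_bits
    off_bits = alpha * task_bits

    # Local execution: delay = cycles / CPU frequency,
    # energy = kappa * cycles * f^2 (common CMOS energy assumption).
    local_cycles = local_bits * cycles_per_bit
    t_local = local_cycles / f_local
    e_local = kappa * local_cycles * f_local ** 2

    # Offloaded part: upload over the channel, then execute remotely.
    t_tx = off_bits / rate
    e_tx = p_tx * t_tx
    t_mec = off_bits * cycles_per_bit / f_mec

    # Assume the local and offloaded parts run in parallel,
    # so the task delay is the slower of the two paths.
    delay = max(t_local, t_tx + t_mec)
    energy = e_local + e_tx
    return w_energy * energy + w_time * delay

# Example with hypothetical numbers: a 1-Mbit task, 1-GHz local CPU,
# 10-GHz MEC server, 1-Mb/s uplink.
fully_local = overhead(0.0, 1e6, 1e9, 1e10, 1e6)
full_offload = overhead(1.0, 1e6, 1e9, 1e10, 1e6)
```

Under these particular assumptions, full offloading yields a lower overhead than fully local execution; the paper's ACDQN algorithm searches for the best split (and channel allocation) per user as the channel varies, rather than fixing alpha.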