Journal Article

Sum Throughput Maximization Scheme for NOMA-Enabled D2D Groups Using Deep Reinforcement Learning in 5G and Beyond Networks
Document Type
Periodical
Source
IEEE Sensors Journal, 23(13):15046-15057, Jul. 2023
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Device-to-device communication
Resource management
Throughput
Computer architecture
NOMA
Training
Optimization
Device-to-device (D2D)
D2D groups (DGs)
deep deterministic policy gradient (DDPG)
deep Q-network (DQN)
multiagent deep reinforcement learning (MADRL)
nonorthogonal multiple access (NOMA)
proximal online policy scheme (POPS)
signal-to-interference noise ratio (SINR)
Language
English
ISSN
1530-437X
1558-1748
2379-9153
Abstract
Device-to-device (D2D) communication underlaying a cellular network is a promising approach for improving spectrum efficiency. However, in this setting, D2D transmissions generate cross-channel and co-channel interference for cellular and other D2D users, which makes spectrum allocation a significant technical challenge. In addition, massive connectivity is a further issue that 5G and beyond networks need to address. To overcome these problems, nonorthogonal multiple access (NOMA) is integrated with the D2D groups (DGs). In this article, the objective is to maximize the sum throughput of the overall network while maintaining the signal-to-interference noise ratio (SINR) of the cellular and D2D users. To achieve this, a discriminated spectrum distribution framework based on multiagent deep reinforcement learning (MADRL), namely the deep deterministic policy gradient (DDPG), is proposed, in which agents share global historical states, actions, and policies during centralized training. Furthermore, a proximal online policy scheme (POPS) is used to decrease the computational complexity of training; it employs a clipped surrogate (clipping substitute) technique to reduce complexity at the training stage. Simulation results demonstrate that the proposed POPS scheme attains 16.67%, 24.98%, and 59.09% higher performance than DDPG, deep dueling, and deep Q-network (DQN), respectively.
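For context, the clipped surrogate (clipping substitute) technique referred to above is commonly realized as a clipped-surrogate policy objective of the kind used in proximal policy optimization. The following minimal NumPy sketch illustrates that objective only; the function name, the epsilon value of 0.2, and the use of advantage estimates are illustrative assumptions, not details taken from the paper, which applies the idea within its MADRL/DDPG-based spectrum allocation framework.

import numpy as np

def clipped_surrogate_loss(new_log_probs, old_log_probs, advantages, epsilon=0.2):
    """Clipped-surrogate policy loss (PPO-style), assumed to correspond to the
    'clipping substitute technique' mentioned in the abstract.

    new_log_probs / old_log_probs: log pi(a|s) under the updated and behavior policies.
    advantages: advantage estimates for the sampled state-action pairs.
    epsilon: clipping range; 0.2 is a common default, not a value from the paper.
    """
    ratio = np.exp(new_log_probs - old_log_probs)   # probability ratio r_t
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    # Taking the elementwise minimum keeps each policy update conservative,
    # which is the mechanism that reduces training-stage instability/complexity.
    return -np.mean(np.minimum(unclipped, clipped))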