Academic paper

Deep Reinforcement Learning for Energy-Efficient Beamforming Design in Cell-Free Networks
Document Type
Conference
Source
2021 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), pp. 1-6, Mar. 2021
Subject
Communication, Networking and Broadcast Technologies
Array signal processing
Heuristic algorithms
Conferences
Wireless networks
Channel estimation
Reinforcement learning
Energy efficiency
Language
Abstract
The cell-free network is considered a promising architecture for satisfying the growing demands of future wireless networks, in which distributed access points coordinate with an edge cloud processor to jointly serve a smaller number of user equipments in a compact area. In this paper, the problem of uplink beamforming design is investigated for maximizing the long-term energy efficiency (EE) with the aid of deep reinforcement learning (DRL) in the cell-free network. Firstly, based on minimum mean square error channel estimation and exploiting successive interference cancellation for signal detection, the expression of the signal-to-interference-plus-noise ratio (SINR) is derived. Secondly, based on this SINR formulation, the long-term EE is defined as a function of the beamforming matrix. Thirdly, to address the dynamic beamforming design with continuous state and action spaces, a DRL-enabled beamforming design is proposed based on the deep deterministic policy gradient (DDPG) algorithm, taking advantage of its double-network architecture. Finally, simulation results indicate that the DDPG-based beamforming design converges to the optimal EE performance. Furthermore, the influence of hyper-parameters on the EE performance of the DDPG-based beamforming design is investigated, and it is demonstrated that an appropriate discount factor and hidden layer size can improve the EE performance.
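The abstract describes deriving a per-user SINR under successive interference cancellation and then defining EE as a function of the beamforming matrix. The sketch below illustrates that pipeline under simplifying assumptions: all function names, the linear receive-combining model, and the power model (transmit-related power plus a fixed circuit power `p_circuit`) are illustrative choices, not the paper's exact expressions, which depend on the MMSE channel-estimation error terms.

```python
import numpy as np

def sinr_with_sic(H, W, noise_power=1.0, order=None):
    """Per-user uplink SINR with successive interference cancellation.

    H: (M, K) channel matrix (M receive antennas, K users).
    W: (M, K) receive beamforming matrix, column k combines user k.
    Users are decoded in `order`; interference from already-decoded
    users is assumed perfectly cancelled (an idealized SIC model).
    """
    M, K = H.shape
    if order is None:
        order = range(K)
    decoded = set()
    sinr = np.zeros(K)
    for k in order:
        w = W[:, k]
        signal = np.abs(w.conj() @ H[:, k]) ** 2
        interference = sum(np.abs(w.conj() @ H[:, j]) ** 2
                           for j in range(K)
                           if j != k and j not in decoded)
        noise = noise_power * np.linalg.norm(w) ** 2
        sinr[k] = signal / (interference + noise)
        decoded.add(k)
    return sinr

def energy_efficiency(H, W, p_circuit=1.0, noise_power=1.0, bandwidth=1.0):
    """EE = sum rate / total consumed power (a common definition).

    The power model (beamforming power plus circuit power) is a
    hypothetical stand-in for the paper's actual power consumption model.
    """
    rates = bandwidth * np.log2(1.0 + sinr_with_sic(H, W, noise_power))
    total_power = np.sum(np.abs(W) ** 2) + p_circuit
    return np.sum(rates) / total_power
```

With orthogonal channels (`H = W = I`) and unit noise, each user's SINR is 1, so the sum rate is `K * log2(2)` and EE reduces to sum rate over total power.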
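The "double-network architecture" of DDPG mentioned above refers to maintaining a slowly updated target copy of the actor and critic alongside the online networks, stabilizing learning over the continuous action space. A minimal sketch of the target-tracking (Polyak averaging) step, with hypothetical flat parameter vectors standing in for real network weights:

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.01):
    """DDPG target update: theta_target <- tau*theta + (1-tau)*theta_target.

    Applied after each gradient step so the target actor/critic slowly
    track the online networks. Parameters are lists of numpy arrays;
    real implementations apply the same rule per network layer.
    """
    return [tau * w + (1.0 - tau) * w_t
            for w, w_t in zip(online_params, target_params)]

rng = np.random.default_rng(0)
online = [rng.standard_normal(4)]   # hypothetical online critic weights
target = [np.zeros(4)]              # target starts from a separate copy
for _ in range(1000):
    target = soft_update(target, online, tau=0.01)
# After many updates the target parameters converge toward the online ones.
```

A small `tau` (e.g. 0.01) makes the target lag the online network, which damps the moving-target problem in the critic's bootstrapped loss.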