Deep Reinforcement Learning for Multi-Agent Non-Cooperative Power Control in Heterogeneous Networks

arXiv (2020)

Abstract
We consider a typical heterogeneous network (HetNet), in which multiple access points (APs) are deployed to serve users by reusing the same spectrum band. Since different APs and users may cause severe interference to each other, advanced power control techniques are needed to manage the interference and enhance the sum-rate of the whole network. Conventional power control techniques first collect instantaneous global channel state information (CSI) and then calculate sub-optimal solutions. Nevertheless, it is challenging to collect instantaneous global CSI in the HetNet, where the global CSI typically changes fast. In this paper, we exploit deep reinforcement learning (DRL) to design a multi-agent non-cooperative power control algorithm in the HetNet. Specifically, by treating each AP as an agent with a local deep neural network (DNN), we propose a multiple-actor-shared-critic (MASC) method to train the local DNNs separately in an online trial-and-error manner. With the proposed algorithm, each AP can independently use its local DNN to control the transmit power with only local observations. Simulation results show that the proposed algorithm outperforms conventional power control algorithms in terms of both the converged average sum-rate and the computational complexity.
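The abstract's architecture (per-AP actors acting on local observations, plus a shared critic that sees the joint state, trained online against a sum-rate reward) can be illustrated with a minimal sketch. This is not the paper's implementation: the network sizes, the linear actor/critic parameterizations, the toy channel model, and the single SGD step on the critic are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AP = 3      # number of access points / agents (illustrative)
OBS_DIM = 4   # dimension of each AP's local observation (assumption)
P_MAX = 1.0   # normalized transmit-power budget


class Actor:
    """Local policy of one AP: maps a local observation to a transmit power.
    A linear score squashed into (0, P_MAX) stands in for the paper's DNN."""

    def __init__(self):
        self.w = rng.normal(scale=0.1, size=OBS_DIM)

    def act(self, obs):
        return P_MAX / (1.0 + np.exp(-self.w @ obs))


class SharedCritic:
    """Centralized value estimate over all observations and chosen powers,
    shared by every actor during training (the 'SC' in MASC)."""

    def __init__(self):
        self.w = rng.normal(scale=0.1, size=N_AP * OBS_DIM + N_AP)

    def value(self, all_obs, powers):
        x = np.concatenate([np.ravel(all_obs), powers])
        return float(self.w @ x)


def sum_rate(h, powers, noise=1e-2):
    """Toy sum-rate reward: sum of log2(1 + SINR) over direct links,
    where h[i, j] is the channel gain from AP i toward user j."""
    total = 0.0
    for i in range(N_AP):
        signal = h[i, i] * powers[i]
        interference = sum(h[j, i] * powers[j] for j in range(N_AP) if j != i)
        total += np.log2(1.0 + signal / (interference + noise))
    return float(total)


actors = [Actor() for _ in range(N_AP)]
critic = SharedCritic()

# One online trial-and-error interaction step:
obs = rng.random((N_AP, OBS_DIM))             # local observations per AP
h = rng.random((N_AP, N_AP)) + np.eye(N_AP)   # gains; diagonal = direct links
powers = np.array([a.act(o) for a, o in zip(actors, obs)])  # decentralized acting
reward = sum_rate(h, powers)                  # network-wide reward

# Shared critic regresses toward the observed reward (one SGD step):
x = np.concatenate([np.ravel(obs), powers])
td_error = reward - critic.value(obs, powers)
critic.w += 1e-2 * td_error * x
```

At execution time only the per-AP actors are needed, which mirrors the abstract's claim that each AP controls its transmit power from local observations alone; the shared critic is a training-time construct.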
Keywords
heterogeneous networks, reinforcement learning, multi-agent, non-cooperative