A Deep Reinforcement Learning Based Framework for Power-Efficient Resource Allocation in Cloud RANs

2017 IEEE International Conference on Communications (ICC)

Abstract
Cloud Radio Access Networks (RANs) have become a key enabling technology for next-generation (5G) wireless communications, as they can meet the requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs to be improved to minimize power consumption while meeting the demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) in solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space, and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem in each decision epoch as a convex optimization problem. We evaluate the proposed framework by comparing it with two widely used baselines via simulation. The simulation results show that it achieves significant power savings while meeting user demands, and that it handles highly dynamic cases well.
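The abstract names the main ingredients of the framework: a state space, an action space, a reward function, and a DNN that approximates the action-value function. The sketch below illustrates, under stated assumptions, how such a DQN-style agent could be wired up for RRH on/off control; the network sizes, power/penalty constants, and the toy reward are illustrative assumptions rather than the paper's formulation, and the per-epoch convex allocation problem is abstracted into a simple capacity check.

```python
# Minimal DQN-style sketch for power-efficient RRH switching in a toy C-RAN.
# All dimensions and constants below are illustrative assumptions, not values
# from the paper; the per-epoch convex beamforming subproblem is abstracted away.
import random
import torch
import torch.nn as nn

NUM_RRH, NUM_USERS = 6, 8
STATE_DIM = NUM_RRH + NUM_USERS        # RRH on/off bits + per-user demand
NUM_ACTIONS = 2 * NUM_RRH + 1          # switch each RRH on/off, or do nothing


def build_q_network() -> nn.Module:
    """DNN approximating the action-value function Q(s, a)."""
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, NUM_ACTIONS),
    )


def select_action(q_net: nn.Module, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection over the discrete RRH switching actions."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def reward(rrh_on: torch.Tensor, demand: torch.Tensor) -> float:
    """Illustrative reward: negative power cost plus a penalty for unmet demand.

    Each active RRH is assumed to draw a fixed power P_ACTIVE; demand counts as
    met only if the active RRHs provide enough aggregate capacity (a stand-in
    for the convex per-epoch allocation problem solved in the paper).
    """
    P_ACTIVE, PENALTY, CAPACITY_PER_RRH = 10.0, 100.0, 2.0
    power = P_ACTIVE * rrh_on.sum().item()
    unmet = max(0.0, demand.sum().item() - CAPACITY_PER_RRH * rrh_on.sum().item())
    return -power - PENALTY * unmet


def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One temporal-difference update on a batch of (s, a, r, s') transitions."""
    states, actions, rewards, next_states = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the agent observes the current demands and RRH statuses, picks a switching action, receives the power-and-penalty reward, and the transitions are replayed through dqn_update to train the Q-network against a periodically synced target network.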
Keywords
Deep Reinforcement Learning, Resource Allocation, Cloud Radio Access Network, Green Communications