Reinforcement Learning-Based Optimal Computing and Caching in Mobile Edge Network

IEEE Journal on Selected Areas in Communications (2020)

Cited by 48 | Views 19
Abstract
Joint pushing and caching is commonly considered an effective way to adapt to tidal effects in networks. However, how to precisely predict users' future requests and push or cache the right content remains an open problem. In this paper, we investigate a joint pushing and caching policy in a general mobile edge computing (MEC) network with multiuser and multicast data. We formulate the joint pushing and caching problem as an infinite-horizon average-cost Markov decision process (MDP). Our aim is not only to maximize bandwidth utilization but also to decrease the total quantity of data transmitted. We then propose a joint pushing and caching policy based on hierarchical reinforcement learning (HRL), which considers both long-term file popularity and short-term temporal correlations of user requests to fully utilize bandwidth. To address the curse of dimensionality, we apply a divide-and-conquer strategy that decomposes the joint base station and user cache optimization problem into two subproblems: the user cache optimization subproblem and the base station cache optimization subproblem. We apply Q-learning with value function approximation and a deep Q-network (DQN) to solve these two subproblems, respectively. Furthermore, we provide some insights into the design of deep reinforcement learning for network caching. The simulation results show that the proposed policy learns content popularity well and predicts users' future demands precisely. Our approach outperforms existing schemes across a range of parameter settings, including the base station cache size, the number of users, and the total number of files, in multiple scenarios.
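For reference, an infinite-horizon average-cost MDP seeks a policy $\pi$ that minimizes the long-run average per-slot cost

$$\bar{J}(\pi) = \lim_{T \to \infty} \frac{1}{T}\,\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T-1} c(s_t, a_t)\right],$$

where $c(s_t, a_t)$ would here capture the bandwidth and transmission costs the abstract describes.

Below is a minimal, illustrative sketch of the DQN component applied to a cache-decision subproblem such as the base station cache optimization above. It is not the authors' implementation: the state (a vector of per-file request statistics), the action space (choosing one file to push into a free cache slot), the reward, and all hyperparameters are assumptions made purely for illustration.

```python
# Hypothetical DQN sketch for a base-station cache subproblem.
# All names, dimensions, and the reward model below are assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

NUM_FILES = 20          # assumed size of the content library
STATE_DIM = NUM_FILES   # assumed state: recent request frequency per file
GAMMA = 0.99            # discounted proxy; the paper formulates an
                        # average-cost MDP, for which discounting is a
                        # common practical substitute

class QNet(nn.Module):
    """Small MLP mapping request statistics to per-file Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_FILES),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNet()
target_net = QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer

def select_action(state, epsilon):
    """Epsilon-greedy choice of which file to push into the cache."""
    if random.random() < epsilon:
        return random.randrange(NUM_FILES)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=32):
    """One DQN update from a sampled minibatch of transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([t[0] for t in batch])
    actions = torch.tensor([t[1] for t in batch])
    rewards = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    next_states = torch.stack([t[3] for t in batch])
    # Q(s, a) for the actions actually taken
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target from the (frozen) target network
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy interaction loop with a synthetic environment (illustrative only):
state = torch.rand(STATE_DIM)
for step in range(200):
    action = select_action(state, epsilon=0.1)
    # Assumed reward: traffic saved when the cached file is then requested.
    reward = float(state[action])
    next_state = torch.rand(STATE_DIM)
    replay.append((state, action, reward, next_state))
    train_step()
    if step % 50 == 0:  # periodically sync the target network
        target_net.load_state_dict(q_net.state_dict())
    state = next_state
```

As a design note, the divide-and-conquer decomposition described in the abstract would run one such learner per subproblem (e.g., a DQN like this for the base station cache and a Q-learner with linear value function approximation for user caches), which keeps each agent's action space tractable.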
Keywords
Joint pushing and caching, deep reinforcement learning, mobile edge network