Unmanned-Aerial-Vehicle-Assisted Computation Offloading For Mobile Edge Computing Based On Deep Reinforcement Learning

IEEE Access (2020)

Cited by 26 | Viewed 5
Abstract
Users in heterogeneous wireless networks may generate massive amounts of data that are delay-sensitive or require computation-intensive processing. Owing to limited computation capability and battery capacity, wireless users (WUs) cannot easily process such data in a timely manner, and mobile edge computing (MEC) is increasingly being used to resolve this issue. Specifically, data generated by WUs can be offloaded for processing to the MEC server, which has greater computing power than the WUs. However, because MEC servers are deployed at fixed locations, they cannot flexibly follow changing user demand, so unmanned aerial vehicles (UAVs) carrying MEC capability have been considered a promising solution in heterogeneous wireless networks. In this study, we design a UAV-assisted computation offloading scheme in an MEC framework with a renewable power supply. The proposed model accounts for the instability of energy arrivals, the stochastic computation tasks generated by WUs, and a time-varying channel state. Owing to the complexity of this state, it is difficult to apply a traditional Markov decision process (MDP) approach, which requires complete prior knowledge, to offloading optimization. Accordingly, we propose UAV-assisted computation offloading for MEC based on deep reinforcement learning (UACODRL) to minimize the total cost, defined as the weighted sum of the delay, energy consumption, and bandwidth cost. We first use the K-Means algorithm to cluster the action space and thereby reduce its dimension. Subsequently, UACODRL is used to find a near-optimal offloading scheme that minimizes the total cost. Simulations demonstrate that UACODRL converges satisfactorily and outperforms four baseline schemes under different parameter configurations.
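To make the abstract's two key ideas concrete, the following is a minimal sketch (not the authors' code): it shows a weighted total cost of the form w1*delay + w2*energy + w3*bandwidth and the use of K-Means to shrink a large discrete offloading action space before a DRL agent selects among the reduced actions. All names, weights, dimensions, and the epsilon-greedy placeholder are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: weighted total cost + K-Means action-space reduction.
# All weights, sizes, and variable names below are assumptions for demonstration.
import numpy as np
from sklearn.cluster import KMeans

def total_cost(delay, energy, bandwidth, w_delay=0.4, w_energy=0.4, w_bw=0.2):
    """Weighted sum of delay, energy consumption, and bandwidth cost (weights assumed)."""
    return w_delay * delay + w_energy * energy + w_bw * bandwidth

rng = np.random.default_rng(0)

# Hypothetical raw action space: each row encodes an offloading decision vector
# for several wireless users (e.g., the fraction offloaded to the UAV-mounted server).
raw_actions = rng.random((1000, 8))           # 1000 candidate actions, 8 WUs

# Reduce the action space: keep only K representative (cluster-center) actions.
K = 16
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(raw_actions)
reduced_actions = kmeans.cluster_centers_     # the DRL agent chooses among these K actions

def epsilon_greedy(q_values, epsilon=0.1):
    """Standard epsilon-greedy selection over the reduced action set."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# Placeholder Q-values; in UACODRL these would come from a deep Q-network trained on
# states covering energy arrivals, task arrivals, and time-varying channel conditions.
q_values = rng.random(K)
chosen_action = reduced_actions[epsilon_greedy(q_values)]
print("chosen offloading vector:", np.round(chosen_action, 2))
print("example total cost:", total_cost(delay=1.2, energy=0.8, bandwidth=0.5))
```

Clustering the raw action space into K centers is one plausible way to realize the dimension reduction described in the abstract; the actual clustering target and network architecture used in UACODRL are specified in the paper itself.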
Keywords
Mobile edge computing, unmanned aerial vehicle, computation offloading, deep reinforcement learning