Deep reinforcement learning based adaptive threshold multi-tasks offloading approach in MEC

Computer Networks (2023)

Abstract
Offloading tasks and data is an important use case for Mobile Edge Computing (MEC) that benefits a wide variety of mobile applications on different platforms, including autonomous vehicles and smartphones. As it travels, a moving vehicle may pass multiple Roadside Units (RSUs) and the MEC servers located within them. On-board applications generate structured tasks and can offload them to the servers they observe. However, since a mobile node cannot obtain information about all servers globally, selecting the best server becomes a key issue. In this paper, we address the problem that the offloading threshold in traditional Optimal Stopping Theory (OST) based decision models cannot adapt to the environment. We use a Deep Q-Network (DQN) to supply the model with an offloading threshold that adapts to the environment, and propose a DQN-based Adaptive Threshold Multi-task Offloading approach (DQN-ATMOO). This method applies the principle of OST to the offloading-order decision problem, divides structured tasks into sub-tasks, and offloads them in sequence, with the goal of maximizing the probability of offloading to the best server while minimizing the total offloading delay. We experimentally compare and evaluate the proposed model on simulated and real datasets. The results show that the proposed model can be implemented effectively on mobile nodes and significantly reduces the total expected processing time.
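The abstract describes a sequential, threshold-based offloading decision: the vehicle observes candidate MEC servers one at a time and, guided by a DQN-provided threshold, stops at the first server that is good enough. The Python sketch below only illustrates this stopping rule under simplified assumptions; the names `adaptive_threshold` and `sequential_offload`, the delay model, and the threshold formula are hypothetical stand-ins, not the paper's DQN-ATMOO implementation.

```python
import random

# Illustrative sketch only (not the authors' implementation): a vehicle observes
# MEC servers one at a time as it passes RSUs and must decide, in sequence,
# whether to offload the current sub-task or wait for the next server.
# In DQN-ATMOO the stopping threshold comes from a trained Deep Q-Network;
# here `adaptive_threshold` is a hypothetical heuristic stand-in.

def adaptive_threshold(remaining_servers, mean_delay):
    """Hypothetical placeholder for the learned threshold: stricter while many
    servers remain, looser as the vehicle runs out of candidates (the usual
    optimal-stopping intuition)."""
    return mean_delay * (1.0 + 1.0 / max(remaining_servers, 1))

def sequential_offload(server_delays, mean_delay):
    """Scan servers in the order they are encountered; offload at the first
    server whose expected delay beats the current threshold, or at the last
    server if none qualifies."""
    n = len(server_delays)
    for i, delay in enumerate(server_delays):
        remaining = n - i
        if delay <= adaptive_threshold(remaining, mean_delay) or remaining == 1:
            return i, delay  # index of the chosen server and its expected delay

if __name__ == "__main__":
    random.seed(0)
    delays = [random.uniform(5.0, 20.0) for _ in range(6)]  # simulated per-server delays (ms)
    chosen, d = sequential_offload(delays, mean_delay=12.0)
    print("observed delays:", [round(x, 1) for x in delays])
    print(f"offloaded at server {chosen} with expected delay {d:.1f} ms")
```

In the paper the threshold would be derived from the DQN's learned values; the simple remaining-server heuristic above merely lets the example run without a trained model.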
Keywords
Mobile edge computing, Optimal stopping theory, Deep Q network, Task offloading, Sequential decision making