Dependency-aware Online Task Offloading based on Deep Reinforcement Learning for IoV

crossref(2024)

Abstract: The integration of artificial intelligence with vehicular wireless communication technology will meet the intelligent communication needs of the Internet of Vehicles (IoV), supporting more sophisticated vehicle applications. However, making real-time, dependency-aware task offloading decisions is difficult due to the high mobility of vehicles and the dynamic nature of the network environment. This leads to additional application computation time and energy consumption, increasing the risk of offloading failures for computationally intensive and latency-sensitive applications. In this paper, an offloading strategy for vehicle applications that jointly considers latency and energy consumption in a base station cooperative computing model is proposed. First, we establish a collaborative offloading model involving multiple vehicles, multiple base stations, and multiple edge servers; vehicular applications are transferred to the application queues of edge servers and prioritized by their completion deadlines. Second, each vehicular application is modeled as a directed acyclic graph (DAG) task with data dependency relationships. We then propose a task offloading method based on task dependency awareness in deep reinforcement learning (DAG-DQN). Tasks are assigned to edge servers at different base stations, and the edge servers collaborate to process tasks, minimizing vehicle application completion time and reducing edge server energy consumption. Finally, simulation results show that, compared with a heuristic method, the proposed DAG-DQN method reduces task completion time by 16%, reduces system energy consumption by 19%, and improves decision-making efficiency by 70%.
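The abstract's two modeling ingredients, an application as a DAG of dependent subtasks and an edge-server queue ordered by completion deadline, can be sketched minimally as below. This is an illustrative sketch, not the paper's implementation: the class names (`Subtask`, `Application`), fields, and the choice of Kahn's algorithm for the dependency-respecting order are all assumptions for the example.

```python
import heapq
from collections import deque

class Subtask:
    """One node of the application DAG (hypothetical structure)."""
    def __init__(self, name, cycles):
        self.name = name          # subtask identifier
        self.cycles = cycles      # CPU cycles required
        self.deps = []            # subtasks that must complete first

class Application:
    """A vehicular application: a DAG of subtasks plus a deadline."""
    def __init__(self, app_id, deadline, subtasks):
        self.app_id = app_id
        self.deadline = deadline  # completion deadline (seconds)
        self.subtasks = subtasks

    def schedulable_order(self):
        """Topological order via Kahn's algorithm: a subtask becomes
        eligible for offloading only after all its dependencies."""
        indeg = {t: len(t.deps) for t in self.subtasks}
        dependents = {t: [] for t in self.subtasks}
        for t in self.subtasks:
            for d in t.deps:
                dependents[d].append(t)
        ready = deque(t for t in self.subtasks if indeg[t] == 0)
        order = []
        while ready:
            t = ready.popleft()
            order.append(t)
            for u in dependents[t]:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
        return order

def deadline_queue(apps):
    """Edge-server admission queue: pop applications by earliest deadline."""
    heap = [(a.deadline, a.app_id, a) for a in apps]
    heapq.heapify(heap)
    while heap:
        _, _, app = heapq.heappop(heap)
        yield app
```

For example, an application whose `fuse` subtask depends on `sense`, and whose `plan` subtask depends on `fuse`, is emitted in that dependency order regardless of input order, while `deadline_queue` serves the application with the tighter deadline first.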