Attention Mechanism-Aided Deep Reinforcement Learning for Dynamic Edge Caching

IEEE Internet of Things Journal (2024)

Abstract
The dynamic mechanism of joint proactive caching and cache replacement, which places content items at cache-enabled edge devices ahead of time, before they are requested, is a promising technique for enhancing traffic offloading and relieving heavy network loads. However, because edge cache capacity and wireless transmission resources are limited, accurately predicting users' future requests and performing dynamic caching is crucial for utilizing these resources effectively. This article investigates joint proactive caching and cache replacement strategies in a general mobile-edge computing (MEC) network with multiple users under a cloud-edge-device collaboration architecture. The joint optimization problem is formulated as an infinite-horizon Markov decision process (MDP) with an average network-load cost objective, aiming to reduce network traffic load while efficiently utilizing the limited available transmission resources. To solve this problem, we design an attention-weighted deep deterministic policy gradient (AWD2PG) model, which uses attention weights to allocate the number of channels from servers to users and applies deep deterministic policies on both the user and server sides for caching decisions, thereby reducing the network traffic load and improving the utilization of network and cache resources. We verify the convergence of the corresponding algorithms and demonstrate the effectiveness of the proposed AWD2PG strategy against benchmark schemes in reducing network load and improving the cache hit rate.
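The abstract does not give implementation details, but the attention-weighted channel assignment it describes can be pictured with the following minimal sketch. All names, state dimensions, the scaled dot-product scoring, and the proportional rounding rule here are illustrative assumptions rather than the paper's actual design: each user's state is scored against the server state, the scores are normalized with a softmax into attention weights, and the channel budget is split in proportion to those weights before the user- and server-side policies make their caching decisions.

```python
import numpy as np

def attention_channel_allocation(server_state, user_states, total_channels, W_q, W_k):
    """Hypothetical sketch: score each user against the server state with
    scaled dot-product attention, then split the channel budget in
    proportion to the softmax weights."""
    q = server_state @ W_q                    # query derived from the server-side state
    keys = user_states @ W_k                  # one key per user
    scores = keys @ q / np.sqrt(q.shape[0])   # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention weights
    # Allocate integer channel counts proportional to the weights.
    alloc = np.floor(weights * total_channels).astype(int)
    # Hand any leftover channels to the highest-weighted users.
    for i in np.argsort(-weights)[: total_channels - alloc.sum()]:
        alloc[i] += 1
    return weights, alloc

# Toy example: 4 users, 8-dimensional states, 10 downlink channels.
rng = np.random.default_rng(0)
server_state = rng.normal(size=8)
user_states = rng.normal(size=(4, 8))
W_q = rng.normal(size=(8, 8))
W_k = rng.normal(size=(8, 8))
weights, alloc = attention_channel_allocation(server_state, user_states, 10, W_q, W_k)
print(weights, alloc)  # per-user attention weights and channel counts
```

In the full AWD2PG scheme these attention weights would presumably be learned jointly with the deep deterministic policy gradient actors rather than fixed random projections as in this toy example.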
Keywords
Servers, Wireless communication, Optimization, Load modeling, Resource management, Internet of Things, Telecommunication traffic, Attention-weighted channel assignment, deep reinforcement learning, edge caching, wireless network