Hierarchical Deep Reinforcement Learning for Joint Service Caching and Computation Offloading in Mobile Edge-Cloud Computing

IEEE Transactions on Services Computing (2024)

Mobile edge-cloud computing networks can provide distributed, hierarchical, and fine-grained resources, and have become a major goal for future high-performance computing networks. The key question is how to jointly optimize service caching and computation offloading. However, the joint service caching and computation offloading problem faces three significant challenges: dynamic tasks, heterogeneous resources, and coupled decisions. In this paper, we investigate joint service caching and computation offloading in mobile edge-cloud computing networks. Specifically, we formulate the optimization problem as minimizing the long-term average service latency, which is NP-hard. To solve the problem, we conduct in-depth theoretical analyses and decompose it into two sub-problems: service caching processing and computation offloading processing. We are the first to propose a novel hierarchical deep reinforcement learning algorithm for the formulated problem, in which multiple edge agents and a cloud agent collaboratively determine the caching actions and offloading actions, respectively. Trace-driven simulations reveal that the proposed framework outperforms several prevailing algorithms in average service latency across diverse scenarios. In a complex real-world scenario, our framework achieves approximately 33% faster convergence and a 39% reduction in average service latency compared to reinforcement learning-based algorithms.
Keywords: Mobile edge-cloud computing, service caching, computation offloading, hierarchical deep reinforcement learning
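The hierarchical structure described in the abstract — edge agents choosing caching actions and a cloud agent choosing offloading actions — can be illustrated with a minimal sketch. This is not the authors' algorithm: it substitutes simple tabular Q-learning agents for the paper's deep RL agents, and the state abstraction (the cloud agent observing only the number of cached services) is an invented simplification for demonstration.

```python
import random

class TabularQAgent:
    """Minimal epsilon-greedy Q-learning agent (illustrative stand-in
    for the paper's deep RL agents)."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        # Epsilon-greedy action selection over the Q-table row.
        if random.random() < self.eps:
            return random.randrange(len(self.q[s]))
        row = self.q[s]
        return row.index(max(row))

    def update(self, s, a, r, s2):
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(self.q[s2])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])

def joint_step(edge_agents, cloud_agent, state):
    """One hierarchical decision step: each edge agent picks a caching
    action (0 = evict, 1 = cache), then the cloud agent picks an
    offloading action (0 = serve at edge, 1 = offload to cloud)
    conditioned on a crude summary of the joint caching decision."""
    caching = [a.act(state) for a in edge_agents]   # caching actions
    cloud_state = sum(caching)                      # hypothetical state abstraction
    offloading = cloud_agent.act(cloud_state)       # offloading action
    return caching, offloading
```

In the paper's setting, the reward for both levels would be derived from the (negative) observed service latency, coupling the two sub-problems through training.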