Efficient Resource Allocation Policy for Cloud Edge End Framework by Reinforcement Learning

2022 IEEE 8th International Conference on Computer and Communications (ICCC)

Abstract
Recently, Mobile Edge Cloud Computing (MECC) has emerged as a promising partial-offloading paradigm for providing computing services. However, designing computation resource allocation policies for an MECC network inevitably raises a challenging delay-sensitive two-queue optimization problem: the coupled computation resource allocation of the edge processing queue and the cloud processing queue makes it difficult to guarantee end-to-end delay requirements. This study investigates the problem under stochastic computation request arrivals, stochastic service times, and dynamic computation resources. We first model the MECC network as a two-stage tandem queue consisting of two sequential computation processing queues, each with multiple servers. A Deep Reinforcement Learning (DRL) algorithm is then applied to learn a computation-speed-adjusting policy for the tandem queue, which provides end-to-end delay guarantees for multiple mobile applications while preventing overuse of the total computation resources of the edge and cloud servers. Finally, extensive simulation results demonstrate that our approach outperforms alternatives in dynamic network environments.
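The two-stage tandem queue described above can be sketched in a few lines. The following is a simplified illustration, not the paper's actual model: it assumes a single FCFS server per stage (the paper uses multiple servers per stage), Poisson arrivals, and exponential service times, and it measures the mean end-to-end delay that the DRL policy would aim to control by adjusting the service rates `mu_edge` and `mu_cloud` (both names are illustrative).

```python
import random

def simulate_tandem(n_jobs, lam, mu_edge, mu_cloud, seed=0):
    """Simulate a two-stage tandem queue (edge stage -> cloud stage).

    Sketch only: one FCFS server per stage, Poisson arrivals at rate
    `lam`, exponential service at rates `mu_edge` and `mu_cloud`.
    Uses the Lindley-style recursion: a job leaves stage 1 at
    max(arrival, previous stage-1 departure) + service, and leaves
    stage 2 at max(stage-1 departure, previous stage-2 departure)
    + service. Returns the mean end-to-end delay.
    """
    rng = random.Random(seed)
    t = 0.0           # arrival clock
    d1 = d2 = 0.0     # last departure times from stage 1 and stage 2
    total_delay = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)                     # next arrival
        d1 = max(t, d1) + rng.expovariate(mu_edge)    # edge processing
        d2 = max(d1, d2) + rng.expovariate(mu_cloud)  # cloud processing
        total_delay += d2 - t                         # end-to-end delay
    return total_delay / n_jobs
```

Raising the service rates (i.e., allocating more computation speed) lowers the mean end-to-end delay, which is exactly the trade-off a speed-adjusting policy exploits against the cost of the extra resources.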
Keywords
Mobile Edge Cloud Computing, Computation Offloading, Resource Allocation, Delay-aware, Deep Reinforcement Learning