RLDRM: Closed Loop Dynamic Cache Allocation with Deep Reinforcement Learning for Network Function Virtualization

2020 6th IEEE Conference on Network Softwarization (NetSoft), 2020

Citations: 10 | Views: 71
Abstract
Network function virtualization (NFV) technology has attracted tremendous interest from the telecommunications industry and data center operators, as it allows service providers to assign resources to Virtual Network Functions (VNFs) on demand, achieving better flexibility, programmability, and scalability. To improve server utilization, one popular practice is to co-locate best-effort (BE) workloads with high-priority (HP) VNFs when the HP VNFs' resource usage is detected to be low. The key challenge of this deployment scheme is to dynamically balance the service level objective (SLO) and the total cost of ownership (TCO) to optimize data center efficiency under inherently fluctuating workloads. With the recent advancement of deep reinforcement learning, we conjecture that it has the potential to solve this challenge by adaptively adjusting resource allocation to achieve improved performance and higher server utilization. In this paper, we present RLDRM (Reinforcement Learning Dynamic Resource Management), a closed-loop automation system that dynamically adjusts last-level cache (LLC) allocation between HP VNFs and BE workloads using deep reinforcement learning. The results demonstrate improved server utilization while maintaining the required SLO for the HP VNFs.
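To make the closed loop the abstract describes concrete, the sketch below pairs a toy telemetry model with a tabular epsilon-greedy learner that searches over HP/BE cache splits. This is a minimal illustration, not the paper's implementation: the paper uses deep RL, whereas a table suffices here to show the observe-act-reward loop; the way count, latency model, SLO value, and reward shape are all invented for the example. On real hardware the chosen split would typically be applied through a cache-partitioning mechanism such as Intel CAT, and the state would come from live telemetry rather than a stub.

import random

# Hypothetical closed-loop LLC-allocation controller in the spirit of RLDRM.
# Action = number of cache ways granted to the HP VNF; the rest go to BE.
N_WAYS = 11                  # ways available to partition (assumption)
ACTIONS = list(range(1, N_WAYS))
SLO_LATENCY_US = 50.0        # hypothetical tail-latency SLO for the HP VNF

def measure(hp_ways):
    """Toy stand-in for telemetry: HP latency falls with more HP ways,
    BE throughput rises with more BE ways. Replace with real metrics."""
    hp_latency = 30.0 + 400.0 / hp_ways + random.gauss(0, 2)
    be_throughput = (N_WAYS - hp_ways) * 10.0 + random.gauss(0, 1)
    return hp_latency, be_throughput

def reward(hp_latency, be_throughput):
    """Reward BE throughput, but heavily penalize SLO violations."""
    if hp_latency > SLO_LATENCY_US:
        return -100.0
    return be_throughput

# Tabular epsilon-greedy learner over discrete cache splits.
q = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1

for step in range(2000):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)       # explore a different split
    else:
        a = max(q, key=q.get)            # exploit the best-known split
    lat, thr = measure(a)                # act (reprogram CAT), then observe
    r = reward(lat, thr)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]       # incremental mean update

best = max(q, key=q.get)
print(f"best HP allocation: {best} ways (Q={q[best]:.1f})")

The design point this illustrates is the one the abstract argues for: because the workload fluctuates, the controller must keep re-measuring and re-balancing SLO compliance against BE throughput rather than fixing a static partition.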
Keywords
deep reinforcement learning, network function virtualization technology, telecommunication industry, data center operators, service providers, high priority VNF resource usage, deployment scheme, service level objective, data center efficiency, resource allocation, higher server utilization, closed-loop automation system RLDRM, reinforcement learning dynamic resource management, last level cache allocation, HP VNF, virtual network functions, closed loop dynamic cache allocation, SLO, total cost of ownership, TCO