DRL-D: Revenue-Aware Online Service Function Chain Deployment via Deep Reinforcement Learning

IEEE Transactions on Network and Service Management (2022)

Abstract
Network function virtualization (NFV) is a promising paradigm in which network functions are migrated from dedicated hardware appliances to software middleboxes to promote service agility and reduce management costs. Benefiting from NFV, the service function chain (SFC) has emerged as a popular form of network service. It allows network traffic to pass through a series of virtual network functions in a specific order required by the business logic to compose a complex service. However, SFC deployment faces new challenges in trading off the objective of high long-term average revenue against the need to make decisions in an online manner. In this paper, we propose DRL-D, a deep reinforcement learning-based approach to the online SFC deployment problem that satisfies the diverse demands of SFC requests within the resource constraints of the underlying infrastructure. DRL-D aims to maximize the long-term average revenue by combining the strengths of a graph convolutional network, which learns a comprehensive representation of the network state, and temporal-difference learning, which generates deployment solutions for SFC requests on the fly. A heuristic algorithm and a new prioritized experience replay technique are then integrated to optimize the DRL framework and reduce its time complexity. Experimental results demonstrate the superiority of DRL-D over other benchmarks in terms of long-term average revenue, acceptance ratio, and revenue-to-cost ratio. The performance evaluation also shows that DRL-D is robust across different scales of physical networks and achieves excellent deployment performance within acceptable runtime.
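The prioritized experience replay mentioned in the abstract is a standard DRL component: transitions are sampled for training with probability proportional to their temporal-difference error, so surprising experiences are revisited more often. The paper proposes its own variant; as a rough illustration of the general mechanism only (not the authors' method), a minimal proportional replay buffer might look like the following, where the class name, the `alpha` exponent, and the tuple-shaped transitions are all illustrative assumptions:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD error skews sampling (0 = uniform)
        self.buffer = []            # stored transitions
        self.priorities = []        # one priority per transition
        self.pos = 0                # ring-buffer write position

    def add(self, transition, td_error=1.0):
        # New transitions get priority |delta|^alpha (small epsilon avoids zero)
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to stored priority
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        indices = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in indices], indices

    def update_priorities(self, indices, td_errors):
        # After a training step, refresh priorities with the new TD errors
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a full agent, importance-sampling weights would typically also be applied to correct the bias this non-uniform sampling introduces; that correction is omitted here for brevity.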
Keywords
Network function virtualization, service function chain, deep reinforcement learning, graph convolutional network