Fairness Analysis of Deep Reinforcement Learning based Multi-Path QUIC Scheduling

38th Annual ACM Symposium on Applied Computing (SAC 2023)

Abstract
Computing devices with multiple active network interfaces, such as cellular, wired, and WiFi, are increasingly common. Typically, such devices select a single interface for communication, but multipath protocols can increase throughput and availability. Multipath TCP (MPTCP) is the predominant protocol in this space; however, Multipath QUIC (MPQUIC) provides several advantages over MPTCP and is gaining adoption. Multipath protocols use a multipath scheduler to determine which packets are sent over which interface. Legacy schedulers perform well but often adapt poorly to dynamic changes in the network. Recent research has produced several Deep Reinforcement Learning (DRL) based schedulers that outperform legacy schedulers and adapt better to changing network conditions. Evaluation of any packet scheduling approach must include an assessment of fairness to concurrent TCP flows: under congestion, all flows (multipath or unipath) should tend toward an equal share of the bandwidth. Unfortunately, research on DRL-based MPQUIC schedulers has not included a rigorous fairness analysis under varied network conditions, risking significant network problems as adoption increases. We present an efficiency and fairness comparison of MPQUIC using DRL-based schedulers built on classic agents such as DQN, Deep SARSA, and Double DQN. Experimental results over a bi-path network show that these schedulers are TCP-friendly in many cases on both paths and converge to link-centric fairness on one path. However, under certain conditions they are either not TCP-friendly or can themselves be bullied, degrading TCP or MPQUIC performance.
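
The abstract explains that a multipath scheduler decides which packets use which interface, and that the evaluated agents (DQN, Deep SARSA, Double DQN) learn this decision. Purely to illustrate the idea, the following is a minimal tabular Q-learning stand-in for such a scheduler over a bi-path network; it is not the paper's implementation, and the state features, reward, and hyperparameters are assumptions.

```python
# Toy sketch of DRL-style path selection for a bi-path network.
# NOT the paper's implementation: the state features, reward, and
# hyperparameters below are illustrative assumptions. A real DQN,
# Double DQN, or Deep SARSA agent would replace the Q-table with a
# neural network over a richer state.
import random
from collections import defaultdict

PATHS = ["wifi", "cellular"]  # the two interfaces of the bi-path setup

class TabularScheduler:
    def __init__(self, epsilon=0.1, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)  # (state, path) -> estimated value
        self.epsilon = epsilon       # exploration rate
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def choose(self, state):
        """Epsilon-greedy: explore occasionally, otherwise pick the
        path with the highest estimated value for this state."""
        if random.random() < self.epsilon:
            return random.choice(PATHS)
        return max(PATHS, key=lambda p: self.q[(state, p)])

    def update(self, state, path, reward, next_state):
        """Q-learning update (DQN optimizes the same target with a network)."""
        best_next = max(self.q[(next_state, p)] for p in PATHS)
        target = reward + self.gamma * best_next
        self.q[(state, path)] += self.alpha * (target - self.q[(state, path)])

# Illustrative use: states could bucket per-path RTT and congestion
# window; the reward could be goodput minus a loss penalty.
sched = TabularScheduler()
state = ("rtt_low", "cwnd_open")
path = sched.choose(state)
sched.update(state, path, reward=1.0, next_state=("rtt_low", "cwnd_open"))
```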
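
Fairness here means that, under congestion, every flow (multipath or unipath) should tend toward an equal share of the bottleneck bandwidth. The abstract does not name a specific metric; as one standard way to quantify closeness to an equal split, a sketch of Jain's fairness index is shown below. The sample throughput values are illustrative, not measurements from the paper.

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    Returns 1.0 when all flows get an equal share, and approaches
    1/n when a single flow monopolizes the bottleneck."""
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    if squares == 0:
        return 1.0  # no traffic at all is trivially fair
    return (total * total) / (n * squares)

# Example: an MPQUIC subflow and two TCP flows sharing one bottleneck.
# Values are hypothetical throughput samples in Mbps.
print(jain_index([10.0, 10.0, 10.0]))  # 1.0   -> perfectly fair
print(jain_index([24.0, 3.0, 3.0]))    # ~0.51 -> MPQUIC bullying TCP
```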
Keywords
Multipath QUIC, Deep Reinforcement Learning, Fairness