Packet Loss Burstiness: Measurements and Implications for Distributed Applications

Long Beach, CA (2007)

Abstract
Many modern massively distributed systems deploy thousands of nodes to cooperate on a computation task, and network congestion occurs in these systems. Most applications rely on congestion control protocols such as TCP to protect the systems from congestion collapse, and most TCP congestion control algorithms use packet loss as a signal to detect congestion. In this paper, we study the packet loss process at the sub-round-trip-time (sub-RTT) timescale and its impact on loss-based congestion control algorithms. Our study suggests that packet loss at the sub-RTT timescale is very bursty. This burstiness has two effects. First, sub-RTT burstiness in the packet loss process leads to complicated interactions between different loss-based algorithms. Second, it makes the latency of data transfers under TCP hard to predict. Our results suggest that the design of a distributed system must seriously consider the nature of the packet loss process and carefully select the congestion control algorithms best suited to the distributed computation environment.
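The contrast the abstract draws, between a smooth loss process and a bursty one at the same average loss rate, can be illustrated with a small simulation. The sketch below is not the paper's measurement methodology; it simply compares a memoryless Bernoulli loss process against a two-state Gilbert-style loss model (a standard way to model bursty loss) and reports the mean length of consecutive-loss runs. All parameter values (`p_gb`, `p_bg`, the 2% target loss rate) are illustrative assumptions.

```python
import random

def bernoulli_losses(n, p, rng):
    # Memoryless loss: each packet dropped independently with probability p.
    return [rng.random() < p for _ in range(n)]

def gilbert_losses(n, p_gb, p_bg, rng):
    # Two-state Gilbert-style model: the 'good' state delivers packets,
    # the 'bad' state drops them. p_gb / p_bg are the transition
    # probabilities good->bad and bad->good per packet.
    bad, out = False, []
    for _ in range(n):
        if bad:
            if rng.random() < p_bg:
                bad = False
        else:
            if rng.random() < p_gb:
                bad = True
        out.append(bad)
    return out

def mean_burst_length(losses):
    # Average run length of consecutive losses (a simple burstiness metric).
    bursts, run = [], 0
    for lost in losses:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return sum(bursts) / len(bursts) if bursts else 0.0

if __name__ == "__main__":
    rng = random.Random(1)
    n = 100_000
    uniform = bernoulli_losses(n, 0.02, rng)
    # Steady-state loss rate = p_gb / (p_gb + p_bg) ≈ 0.02, matching above.
    bursty = gilbert_losses(n, p_gb=0.004, p_bg=0.2, rng=rng)
    print(f"uniform: rate={sum(uniform)/n:.3f} mean burst={mean_burst_length(uniform):.2f}")
    print(f"bursty : rate={sum(bursty)/n:.3f} mean burst={mean_burst_length(bursty):.2f}")
```

At roughly the same 2% loss rate, the Gilbert trace produces loss runs several packets long (expected burst length is 1/p_bg = 5 here) while the Bernoulli trace rarely drops two packets in a row, which is why per-RTT loss counts, and hence loss-based congestion signals, can look very different under the two processes.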
Keywords
Internet, telecommunication congestion control, transport protocols, TCP congestion control algorithms, distributed computation environments, distributed system, loss-based congestion control algorithms, network congestion, packet loss burstiness, sub-RTT timescale, sub-round-trip-time