Efficient Replication for Straggler Mitigation in Distributed Computing

arXiv (2020)

Abstract
The potential of distributed computing to improve the performance of big-data processing engines is contingent on mitigating several challenges. In particular, because it relies on multiple commodity servers, the performance of a distributed computing engine is dictated by its slowest servers, known as stragglers. Redundancy can mitigate stragglers by reducing the engine's dependence on any single server. Nevertheless, redundancy can itself burden the system and aggravate stragglers. In this paper, we consider task replication as the redundancy technique and study optimum redundancy planning to improve the performance of a master-worker distributed computing system. We start with the optimum policy for assigning a given set of redundant tasks to a set of workers. Using results from majorization theory, we show that if the service time of workers is a stochastically (decreasing and) convex random variable, a balanced assignment of non-overlapping batches of tasks minimizes the average job compute time. With these results, we then study the optimum level of redundancy from the perspectives of average job compute time and compute-time predictability. We derive the efficient redundancy level as a function of the tasks' service-time distribution. We observe that the redundancy level that minimizes average compute time is not necessarily the one that maximizes compute-time predictability. Finally, by running experiments on Google cluster traces, we show that careful planning of redundancy according to the tasks' service-time distribution can speed up a computing job by an order of magnitude.
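The balanced, non-overlapping batch assignment described above can be illustrated with a small Monte-Carlo sketch. This is not the paper's experimental setup; the service-time model (a hypothetical shifted exponential whose scale grows with batch size) and all parameter values are assumptions chosen for illustration. Workers are split evenly across batches, each batch finishes when its fastest replica does, and the job waits for the slowest batch:

```python
import random

def job_time(n_workers, n_batches, service):
    # Balanced assignment of non-overlapping batches: the n_workers are
    # split evenly across n_batches, so each batch is replicated on
    # n_workers // n_batches workers. A batch finishes when its fastest
    # replica does; the job finishes when the slowest batch does.
    replicas_per_batch = n_workers // n_batches
    batch_times = [min(service(n_batches) for _ in range(replicas_per_batch))
                   for _ in range(n_batches)]
    return max(batch_times)

def shifted_exp(n_batches, n_tasks=12):
    # Hypothetical service-time model: a batch of n_tasks / n_batches
    # tasks takes (batch size) * (1 + Exp(1)) time units, i.e. a fixed
    # per-task cost plus a random slowdown.
    return (n_tasks / n_batches) * (1.0 + random.expovariate(1.0))

if __name__ == "__main__":
    random.seed(0)
    n = 12
    for n_batches in (1, 2, 3, 4, 6, 12):  # 12 batches => no replication
        trials = 20000
        avg = sum(job_time(n, n_batches, shifted_exp)
                  for _ in range(trials)) / trials
        print(f"batches={n_batches:2d}  replicas={n // n_batches:2d}  "
              f"avg job time={avg:.2f}")
```

Sweeping the number of batches trades replication (which trims each batch's tail via the fastest replica) against batch size (which inflates every replica's workload); which level wins depends on the service-time distribution, mirroring the paper's observation that the efficient redundancy level is distribution-dependent.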