Data Allocation for Approximate Gradient Coding in Edge Networks

ISIT (2023)

Abstract
To leverage the computing power of an edge network, a machine learning task can be divided into several subtasks that are assigned to multiple computing devices. Under the master-worker architecture, the master partitions the data and distributes it among several workers. In each iteration, the master asks the workers to compute some function of their locally stored data; in gradient-based learning, for example, this function can be a partial gradient. Since the workers have different computing resources, the speed of distributed learning is limited by the workers with long latency, called stragglers. Gradient coding mitigates the straggler problem by allowing the master to recover the desired feedback information in the presence of s stragglers: if the total number of workers is n, the master only needs to wait for the n−s fastest workers. In this paper we consider the data allocation problem so that the gradient vector can be approximately recovered by the master node with small error. A block repetition scheme is proved to be the optimal data allocation scheme when the goal is to minimize the average recovery error.
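The abstract does not spell out the construction, but the following Python sketch illustrates the block-repetition idea under stated assumptions: the data is split into equally sized blocks, each block is replicated across one disjoint group of workers, and the master averages the partial gradients it receives from the n−s fastest workers, so that blocks whose entire group straggles contribute to the recovery error. The function names, the uniform group sizes, and the toy parameters below are illustrative, not the paper's exact scheme.

import numpy as np

def block_repetition_allocation(n_workers, n_blocks):
    # Block repetition (assumed form): split the n workers into n_blocks
    # disjoint groups; every worker in group j stores (only) data block j.
    assert n_workers % n_blocks == 0
    group_size = n_workers // n_blocks
    return [w // group_size for w in range(n_workers)]  # worker -> block index

def approximate_gradient(block_grads, allocation, responders, n_blocks):
    # Master-side recovery from the fastest workers: each responder
    # contributes the partial gradient of the block it stores; a block
    # whose whole group straggles is missing and causes recovery error.
    recovered = {allocation[w]: block_grads[allocation[w]] for w in responders}
    # Average over all blocks; missing blocks are implicitly zero.
    return sum(recovered.values()) / n_blocks

# Toy run: n = 6 workers, 3 blocks (replication factor 2), s = 2 stragglers.
rng = np.random.default_rng(0)
n_workers, n_blocks, dim = 6, 3, 4
alloc = block_repetition_allocation(n_workers, n_blocks)
block_grads = {j: rng.normal(size=dim) for j in range(n_blocks)}
full_grad = sum(block_grads.values()) / n_blocks

responders = [0, 1, 2, 3]  # the n - s fastest workers; the group of block 2 straggles
approx = approximate_gradient(block_grads, alloc, responders, n_blocks)
print("recovery error:", np.linalg.norm(full_grad - approx))

In this toy run the group holding block 2 straggles entirely, so the printed error is nonzero; whenever every group contains at least one of the n−s fastest workers, the full gradient is recovered exactly.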
Keywords
approximate gradient coding, average recovery error, block repetition scheme, computing devices, data allocation scheme, distributed learning, edge network, gradient vector, gradient-based learning, machine learning task, master-worker architecture, partial gradient function, stragglers