Gradient Coding in Decentralized Learning for Evading Stragglers
CoRR (2024)
Abstract
In this paper, we consider a decentralized learning problem in the presence
of stragglers. Although gradient coding techniques have been developed for
distributed learning to evade stragglers, where the devices send encoded
gradients with redundant training data, it is difficult to apply those
techniques directly to decentralized learning scenarios. To deal with this
problem, we propose a new gossip-based decentralized learning method with
gradient coding (GOCO). In the proposed method, to avoid the negative impact of
stragglers, the parameter vectors are updated locally using encoded gradients
based on the framework of stochastic gradient coding and then averaged in a
gossip-based manner. We analyze the convergence performance of GOCO for
strongly convex loss functions and provide simulation results demonstrating
the superiority of the proposed method over baseline methods in terms of
learning performance.
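The two-step structure described above (a local update with straggler-tolerant encoded gradients, followed by gossip averaging with neighbors) can be sketched on a toy decentralized least-squares problem. This is a hypothetical illustration, not the paper's algorithm: the ring topology, mixing weights, straggling probability `p_straggle`, and the simple two-shard redundancy with 1/(1-p) reweighting (in the spirit of stochastic gradient coding, which keeps the encoded gradient unbiased in expectation) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decentralized least-squares setup: n devices on a ring graph,
# each holding one data shard (A[i], b[i]).
n, d, m = 4, 3, 20                       # devices, dimension, samples per shard
A = [rng.normal(size=(m, d)) for _ in range(n)]
b = [rng.normal(size=m) for _ in range(n)]

# Doubly stochastic mixing matrix for the ring (gossip-averaging weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def encoded_grad(i, xi, p_straggle=0.3):
    """Hypothetical stochastic-gradient-coding step: device i holds its own
    shard plus a redundant copy of its neighbor's shard. Each partial gradient
    arrives in time only with probability 1 - p_straggle and is reweighted by
    1/(1 - p_straggle), so the encoded gradient is unbiased in expectation."""
    g = np.zeros(d)
    for j in (i, (i + 1) % n):           # own shard + redundant neighbor shard
        if rng.random() > p_straggle:    # this partial gradient is not straggling
            g += A[j].T @ (A[j] @ xi - b[j]) / (m * (1 - p_straggle))
    return g / 2                         # average over the two assigned shards

x = [np.zeros(d) for _ in range(n)]      # one parameter vector per device
lr = 0.05
for t in range(200):
    # Step 1: local update with encoded (straggler-tolerant) gradients.
    half = [x[i] - lr * encoded_grad(i, x[i]) for i in range(n)]
    # Step 2: gossip averaging with ring neighbors.
    x = [sum(W[i, j] * half[j] for j in range(n)) for i in range(n)]
```

After enough iterations the device parameters approach consensus near the global least-squares solution, despite each partial gradient being dropped with probability `p_straggle`.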