Parallel Stochastic Asynchronous Coordinate Descent

SIAM Journal on Optimization (2021)

Abstract
Several works have shown that linear speedup is achieved by an asynchronous parallel implementation of stochastic coordinate descent, so long as there is not too much parallelism. More specifically, it is known that if all updates are of similar duration, then linear speedup is possible with up to $\Theta(L_{\max}\sqrt{n}/L_{\overline{\text{res}}})$ processors, where $L_{\max}$ and $L_{\overline{\text{res}}}$ are suitable Lipschitz parameters. This paper shows the bound is tight for almost all possible values of these parameters.
Keywords
stochastic asynchronous coordinate descent, parallelism bound