Faster Parallel Solver for Positive Linear Programs via Dynamically-Bucketed Selective Coordinate Descent.

arXiv: Data Structures and Algorithms (2015)

Abstract
We provide improved parallel approximation algorithms for the important class of packing and covering linear programs. In particular, we present new parallel $\epsilon$-approximate packing and covering solvers which run in $\tilde{O}(1/\epsilon^2)$ expected time, i.e., in expectation they take $\tilde{O}(1/\epsilon^2)$ iterations and do $\tilde{O}(N/\epsilon^2)$ total work, where $N$ is the size of the constraint matrix, $\epsilon$ is the error parameter, and $\tilde{O}$ hides logarithmic factors. To achieve our improvement, we introduce an algorithmic technique of broader interest: dynamically-bucketed selective coordinate descent (DB-SCD). At each step of the iterative optimization algorithm, the DB-SCD method dynamically buckets the coordinates of the gradient into groups of roughly equal magnitude and updates all the coordinates in one of the buckets. This dynamically-bucketed updating lets us take steps along several coordinates with similar-sized gradients, thereby permitting more appropriate step sizes at each step of the algorithm. In particular, this technique allows us to use in a straightforward manner the recent analysis from the breakthrough results of Allen-Zhu and Orecchia [2] to achieve our still-further improved bounds. More generally, this method addresses interference among coordinates, by which we mean the impact of the update of one coordinate on the gradients of other coordinates. Such interference is a core issue in parallelizing optimization routines that rely on smoothness properties. Since our DB-SCD method reduces interference by updating a selective subset of variables at each iteration, we expect it may also have more general applicability in optimization.
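
To make the bucketing idea concrete, below is a minimal sketch (not from the paper) of one DB-SCD-style iteration: gradient coordinates are grouped into buckets of roughly equal magnitude (here, geometric buckets by powers of a base), and only the coordinates in one chosen bucket are updated. The function name `dbscd_step`, the bucket-selection rule, the objective, and the step-size choice are illustrative assumptions, not the paper's actual packing/covering solver or its analysis.

```python
# Hypothetical sketch of one dynamically-bucketed selective coordinate descent step.
# Assumptions (not from the paper): geometric magnitude buckets, a uniformly random
# choice of bucket, and a fixed step size applied to the selected coordinates.
import numpy as np


def dbscd_step(x, grad_fn, step_size=0.1, base=2.0, rng=None):
    """Bucket gradient coordinates by magnitude and update one bucket."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_fn(x)
    nonzero = np.abs(g) > 1e-12
    if not np.any(nonzero):
        return x  # (near-)stationary point: nothing to update

    # Coordinates with |g_i| in [base**k, base**(k+1)) share bucket index k.
    levels = np.full(g.shape, np.iinfo(np.int64).min)
    levels[nonzero] = np.floor(
        np.log(np.abs(g[nonzero])) / np.log(base)
    ).astype(np.int64)

    # Pick one non-empty bucket; its coordinates have similar-sized gradients,
    # so a common step size is appropriate for all of them.
    chosen = rng.choice(np.unique(levels[nonzero]))
    mask = levels == chosen

    x_new = x.copy()
    x_new[mask] -= step_size * g[mask]
    return x_new


if __name__ == "__main__":
    # Toy usage: minimize the separable quadratic f(x) = 0.5 * ||x||^2.
    grad = lambda x: x
    x = np.array([4.0, 3.5, 0.2, 0.15, -8.0])
    for _ in range(200):
        x = dbscd_step(x, grad, step_size=0.5)
    print(x)  # should be close to the all-zeros minimizer
```

Because only coordinates with comparable gradient magnitudes are touched in a given iteration, the update of one coordinate perturbs the gradients of the others less drastically, which is the interference-reduction effect the abstract describes.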