An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation.

SODA '19: Symposium on Discrete Algorithms, San Diego, California, January 2019.

Abstract
In this paper we study the adaptivity of submodular maximization. Adaptivity quantifies the number of sequential rounds an algorithm makes when function evaluations can be executed in parallel. Adaptivity is a fundamental concept that is heavily studied across a variety of areas in computer science, largely due to the need for parallelizing computation. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, it is well known that a simple greedy algorithm achieves a 1 - 1/e approximation [NWF78] and that this approximation is optimal for polynomial-time algorithms [NW78]. Somewhat surprisingly, despite extensive efforts on submodular optimization for large-scale datasets, until very recently there was no known algorithm that achieves a constant-factor approximation for this problem whose adaptivity is sublinear in the size of the ground set n. Recent work by [BS18] describes an algorithm that obtains an approximation arbitrarily close to 1/3 in O(log n) adaptive rounds and shows that no algorithm can obtain a constant-factor approximation in õ(log n) adaptive rounds. This approach achieves an exponential speedup in adaptivity (and parallel running time) at the expense of approximation quality. In this paper we describe a novel approach that yields an algorithm whose approximation is arbitrarily close to the optimal 1 - 1/e guarantee in O(log n) adaptive rounds. This algorithm therefore achieves an exponential speedup in parallel running time for submodular maximization at the expense of an arbitrarily small loss in approximation quality. This guarantee is optimal in both approximation and adaptivity, up to lower-order terms.
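To make the adaptivity measure concrete, below is a minimal Python sketch (illustrative only, not the paper's algorithm) of the classical greedy algorithm of [NWF78]. The coverage function and all names here are hypothetical examples. Each greedy iteration depends on the set built in the previous one, so greedy requires k adaptive rounds, which can be as large as Θ(n), even though the marginal-gain queries inside a single round are independent and could run in parallel.

```python
# Minimal sketch (illustrative, not from the paper) of the classical greedy
# algorithm [NWF78] for maximizing a monotone submodular function f under a
# cardinality constraint k. Greedy achieves a (1 - 1/e) approximation, but
# its adaptivity is k: the k iterations are inherently sequential, while the
# marginal-gain queries inside one iteration could all run in parallel.

def greedy(f, ground_set, k):
    """Greedily pick at most k elements maximizing a set function f."""
    S = set()
    for _ in range(k):  # one adaptive round per iteration
        candidates = ground_set - S
        if not candidates:
            break
        # These evaluations are mutually independent and could be
        # executed in parallel within this single round.
        gains = {x: f(S | {x}) - f(S) for x in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:  # monotone f: no element improves, stop early
            break
        S.add(best)
    return S

if __name__ == "__main__":
    # Example: a coverage function, a canonical monotone submodular function.
    cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
    f = lambda S: len(set().union(*(cover[x] for x in S)))
    print(greedy(f, set(cover), k=2))  # picks two sets covering most elements
```

Roughly speaking, low-adaptivity algorithms such as [BS18] and the one in this paper avoid this sequential bottleneck by selecting many elements per round (e.g., via adaptive sampling), compressing the k sequential greedy steps into O(log n) rounds.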