On Parallelizing Streaming Algorithms.

Electronic Colloquium on Computational Complexity (ECCC), 2015

Abstract
We study the complexity of parallelizing streaming algorithms (or equivalently, branching programs). If M(f) denotes the minimum average memory required to compute a function f(x1, x2, ..., xn), how much memory is required to compute f on k independent streams that arrive in parallel? We show that when the inputs (updates) are sampled independently from some domain X and M(f) = Ω(n), then computing the value of f on k streams requires average memory at least Ω(k · M(f)/n). Our results are obtained by defining new ways to measure the information complexity of streaming algorithms. We define two such measures: the transitional and the cumulative information content. We prove that any streaming algorithm with transitional information content I can be simulated using average memory O(n(I + 1)). On the other hand, if every algorithm with cumulative information content I can be simulated with average memory O(I + 1), then computing f on k streams requires average memory at least Ω(k · (M(f) − 1)).
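To make the setting concrete, below is a minimal Python sketch, not taken from the paper, of the model the abstract describes: a single-pass streaming computation with small working memory, and the naive simulation on k streams whose updates arrive interleaved, which keeps one copy of the state per stream and therefore uses roughly k times the single-stream memory. This is the regime that the Ω(k · M(f)/n) lower bound speaks to. The names stream_f and parallel_f, and the XOR update rule, are hypothetical illustrations, not constructions from the paper.

```python
# Illustrative sketch only: a toy streaming algorithm and its naive
# parallelization over k independent streams (hypothetical example,
# not a construction from the paper).

from typing import Iterable, Iterator, List


def stream_f(updates: Iterable[int]) -> int:
    """Toy single-stream algorithm: maintains O(1) state (a running XOR)."""
    state = 0
    for x in updates:
        state ^= x          # constant-memory update per stream element
    return state


def parallel_f(streams: List[Iterator[int]]) -> List[int]:
    """Naive simulation on k streams arriving in parallel.

    One copy of the state is kept per stream, so the working memory is
    roughly k times that of the single-stream algorithm.
    """
    states = [0] * len(streams)              # k independent states
    exhausted = [False] * len(streams)
    while not all(exhausted):
        for i, it in enumerate(streams):     # round-robin over the k streams
            if exhausted[i]:
                continue
            try:
                states[i] ^= next(it)
            except StopIteration:
                exhausted[i] = True
    return states


if __name__ == "__main__":
    k, n = 3, 8
    data = [[(i + j) % 5 for j in range(n)] for i in range(k)]
    print([stream_f(s) for s in data])               # sequential baseline
    print(parallel_f([iter(s) for s in data]))       # interleaved arrival
```

The paper asks whether this factor-k blow-up is necessary; under the stated assumptions (independent updates and M(f) = Ω(n)), its lower bound shows that average memory Ω(k · M(f)/n) is unavoidable.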