The Cost of Parallelizing Boosting

Xin Lyu, Hongxun Wu, Junzhao Yang

CoRR (2024)

Abstract
We study the cost of parallelizing weak-to-strong boosting algorithms for learning, following the recent work of Karbasi and Larsen. Our main results are two-fold:

- First, we prove a tight lower bound, showing that even "slight" parallelization of boosting requires an exponential blow-up in the complexity of training. Specifically, let γ be the weak learner's advantage over random guessing. The famous AdaBoost algorithm produces an accurate hypothesis by interacting with the weak learner for Õ(1/γ^2) rounds, where each round runs in polynomial time. Karbasi and Larsen showed that "significant" parallelization must incur an exponential blow-up: any boosting algorithm either interacts with the weak learner for Ω(1/γ) rounds or incurs an exp(d/γ) blow-up in the complexity of training, where d is the VC dimension of the hypothesis class. We close the gap by showing that any boosting algorithm either has Ω(1/γ^2) rounds of interaction or incurs a smaller exponential blow-up of exp(d).

- Complementing our lower bound, we show that there exists a boosting algorithm using Õ(1/(t γ^2)) rounds that suffers only a blow-up of exp(d · t^2). Plugging in t = ω(1), this shows that the smaller blow-up in our lower bound is tight. More interestingly, this provides the first trade-off between the parallelism and the total work required for boosting.
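For context on the round structure the abstract discusses, below is a minimal sketch of the standard AdaBoost loop (not the paper's algorithm): each of the T ≈ Õ(1/γ^2) rounds makes one call to the weak learner on a distribution that depends on the previous round's output, which is exactly the sequential dependence that parallelization must break. The names `weak_learner`, `adaboost`, and the oracle's signature are illustrative assumptions, not part of the paper.

```python
import math

def adaboost(sample, labels, weak_learner, num_rounds):
    """Sketch of sequential weak-to-strong boosting.

    sample: list of points; labels: list of +/-1 labels;
    weak_learner(sample, labels, weights) -> hypothesis h with h(x) in {-1, +1}
    and edge gamma over random guessing under the given weights (assumed oracle).
    """
    n = len(sample)
    weights = [1.0 / n] * n            # uniform initial distribution over the sample
    hypotheses, alphas = [], []

    for _ in range(num_rounds):        # sequential: round t needs the weights produced by round t-1
        h = weak_learner(sample, labels, weights)
        # weighted error of the weak hypothesis on the current distribution
        err = sum(w for w, x, y in zip(weights, sample, labels) if h(x) != y)
        err = min(max(err, 1e-12), 1 - 1e-12)          # guard against degenerate errors
        alpha = 0.5 * math.log((1 - err) / err)
        # reweight: misclassified points gain mass for the next round
        weights = [w * math.exp(-alpha * y * h(x))
                   for w, x, y in zip(weights, sample, labels)]
        total = sum(weights)
        weights = [w / total for w in weights]
        hypotheses.append(h)
        alphas.append(alpha)

    # final strong hypothesis: weighted majority vote over the weak hypotheses
    def strong(x):
        return 1 if sum(a * h(x) for a, h in zip(alphas, hypotheses)) >= 0 else -1
    return strong
```

The paper's question is what must be paid, in total work, to compress this chain of Õ(1/γ^2) adaptive weak-learner calls into fewer rounds of interaction.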