SpecFL: An Efficient Speculative Federated Learning System for Tree-based Model Training

International Symposium on High-Performance Computer Architecture (2024)

Abstract
Federated tree-based models are popular in many real-world applications owing to their high accuracy and good interpretability. However, the classical synchronous method causes inefficient federated tree model training due to tree node dependencies. Inspired by speculative execution techniques in modern high-performance processors, this paper proposes SpecFL, a novel and efficient speculative federated learning system. Instead of simply waiting, SpecFL optimistically predicts the outcome of the prior tree node. By resolving tree node dependencies with a split point predictor, the training tasks of child tree nodes can be executed speculatively in advance via separate threads. This speculation enables cross-layer concurrent training, thus significantly reducing the waiting time. Furthermore, we propose a greedy speculation policy to exploit speculative training for deeper inter-layer concurrent training and an eager rollback mechanism for lossless model quality. We implement SpecFL and evaluate its efficiency in a real-world federated learning setting with six public datasets. The evaluation results demonstrate that SpecFL can be 2.08-3.33x and 2.14-3.44x faster than the state-of-the-art GBDT and RF implementations, respectively.
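The core idea described above can be illustrated with a short sketch: while the parent node waits for the (slow, synchronous) federated split search, a separate thread speculatively trains the child nodes using a predicted split point; if the prediction turns out wrong, the speculative result is discarded and the children are retrained (the rollback). This is a minimal, hypothetical toy model, not the paper's implementation; all function names and the trivial predictor are assumptions.

```python
# Toy sketch of SpecFL-style speculative tree-node training.
# All names and logic here are illustrative assumptions, not the paper's code.
import threading


def federated_find_split(node_data):
    # Stand-in for the slow synchronous federated split search at the parent.
    return max(node_data) // 2  # toy "exact" best split


def predict_split(node_data):
    # Hypothetical split-point predictor: guesses from a sample of the data.
    sample = node_data[: max(1, len(node_data) // 2)]
    return max(sample) // 2


def train_children(node_data, split):
    # Partition the node's data into left/right children at the split point.
    left = [x for x in node_data if x <= split]
    right = [x for x in node_data if x > split]
    return left, right


def speculative_train(node_data):
    predicted = predict_split(node_data)
    spec_result = {}

    def speculative_worker():
        # Child training runs concurrently with the parent's split search,
        # enabling the cross-layer concurrency the abstract describes.
        spec_result["children"] = train_children(node_data, predicted)

    t = threading.Thread(target=speculative_worker)
    t.start()
    actual = federated_find_split(node_data)  # parent's real split arrives
    t.join()
    if predicted == actual:
        return spec_result["children"]        # speculation hit: reuse work
    return train_children(node_data, actual)  # rollback: retrain losslessly


data = list(range(1, 11))
left, right = speculative_train(data)
```

In this toy run the predictor misses (it guesses 2 while the true split is 5), so the rollback path fires and the children are recomputed from the actual split, mirroring the "eager rollback" that preserves model quality.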