SpecFL: An Efficient Speculative Federated Learning System for Tree-based Model Training

Yuhui Zhang, Lutan Zhao, Cheng Che, XiaoFeng Wang, Dan Meng, Rui Hou

International Symposium on High-Performance Computer Architecture (2024)

Abstract
Federated tree-based models are popular in many real-world applications owing to their high accuracy and good interpretability. However, the classical synchronous method makes federated tree model training inefficient because of tree node dependencies: a child node cannot be trained until its parent's split point is determined. Inspired by speculative execution techniques in modern high-performance processors, this paper proposes SpecFL, a novel and efficient speculative federated learning system. Instead of simply waiting, SpecFL optimistically predicts the outcome of the prior tree node. By resolving tree node dependencies with a split point predictor, the training tasks of child tree nodes can be executed speculatively in advance on separate threads. This speculation enables cross-layer concurrent training and thus significantly reduces waiting time. Furthermore, we propose a greedy speculation policy that exploits speculative training for deeper inter-layer concurrency, and an eager rollback mechanism that guarantees lossless model quality. We implement SpecFL and evaluate its efficiency in a real-world federated learning setting with six public datasets. The evaluation results demonstrate that SpecFL is 2.08-3.33x and 2.14-3.44x faster than state-of-the-art GBDT and RF implementations, respectively.
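To make the abstract's mechanism concrete, the sketch below illustrates the general idea of speculative tree-node training with rollback: a parent's split point is guessed so that its children can start training on separate threads before the exact federated split computation finishes. This is a minimal illustration, not the authors' implementation; `predict_split`, `compute_true_split`, `train_node`, and the node attributes (`left`, `right`, `is_leaf`) are all hypothetical placeholders.

```python
# Minimal sketch of speculative node training in the spirit of SpecFL.
# All callbacks and node fields are assumed/hypothetical, not from the paper.
from concurrent.futures import ThreadPoolExecutor

def train_tree_speculatively(root, predict_split, compute_true_split, train_node):
    """Train a tree while speculatively starting child nodes early."""
    with ThreadPoolExecutor() as pool:
        frontier = [root]
        while frontier:
            node = frontier.pop()
            # Optimistically predict the node's split point instead of waiting.
            guess = predict_split(node)
            # Launch child training on separate threads using the guess
            # (cross-layer concurrency: children start before the split commits).
            spec_left = pool.submit(train_node, node, guess, side="left")
            spec_right = pool.submit(train_node, node, guess, side="right")
            # Meanwhile, the slow exact federated split computation proceeds.
            true_split = compute_true_split(node)
            if true_split == guess:
                # Prediction was correct: commit the speculative work.
                node.left, node.right = spec_left.result(), spec_right.result()
            else:
                # Eager rollback: discard mis-speculated work and retrain,
                # so model quality matches purely synchronous training.
                spec_left.cancel()
                spec_right.cancel()
                node.left = train_node(node, true_split, side="left")
                node.right = train_node(node, true_split, side="right")
            frontier += [c for c in (node.left, node.right) if c and not c.is_leaf]
    return root
```

When the predictor is right, the children's training overlaps the parent's split computation, which is the source of the speedup; when it is wrong, the rollback path reproduces the synchronous result, matching the paper's lossless-quality claim.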