SuperScaler: Supporting Flexible DNN Parallelization via a Unified Abstraction

Zhiqi Lin, Youshan Miao, Guodong Liu, Xiaoxiang Shi, Quanlu Zhang, Fan Yang, Saeed Maleki, Yan Zhu, Cheng Xu, Cheng Li, Mei Yang, Lintao Zhang, Lidong Zhou

arXiv (Cornell University), 2023

Abstract
With growing model sizes, deep neural networks (DNNs) are increasingly trained over massive GPU accelerators, which demands a proper parallelization plan that transforms a DNN model into fine-grained tasks and then schedules them to GPUs for execution. Due to the large search space, contemporary parallelization plan generators often rely on empirical rules that couple transformation and scheduling, and fall short in exploring more flexible schedules that yield better memory usage and compute efficiency. This tension is exacerbated by emerging models with increasing structural complexity and model size. SuperScaler is a system that facilitates the design and generation of highly flexible parallelization plans. It explicitly formulates plan design and generation as three sequential phases: model transformation, space-time scheduling, and data-dependency preservation. This principled approach decouples multiple seemingly intertwined factors and enables the composition of highly flexible parallelization plans. As a result, SuperScaler can not only generate empirical parallelization plans, but also construct new plans that achieve up to 3.5X speedup compared to state-of-the-art solutions such as DeepSpeed, Megatron, and Alpa, for emerging DNN models like Swin-Transformer and AlphaFold2, as well as for well-optimized models like GPT-3.
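To make the three-phase decomposition concrete, below is a minimal, hypothetical sketch of what separating transformation, space-time scheduling, and dependency preservation could look like. All names (`Task`, `transform`, `schedule`, `preserve_dependencies`) and the toy round-robin placement are illustrative assumptions, not SuperScaler's actual API or algorithm.

```python
# Hypothetical sketch of a three-phase plan pipeline; NOT SuperScaler's real API.
from dataclasses import dataclass, field


@dataclass
class Task:
    op: str            # operator shard this task computes
    device: int = -1   # assigned accelerator (space), filled in by scheduling
    step: int = -1     # assigned time slot (time), filled in by scheduling
    deps: list = field(default_factory=list)  # upstream tasks it consumes


def transform(ops, shards):
    """Phase 1: split each operator into `shards` fine-grained tasks."""
    return [Task(op=f"{op}/shard{i}") for op in ops for i in range(shards)]


def schedule(tasks, num_devices):
    """Phase 2: place every task in space (device) and time (step)."""
    for i, t in enumerate(tasks):
        t.device = i % num_devices   # toy round-robin placement
        t.step = i // num_devices
    return tasks


def preserve_dependencies(tasks):
    """Phase 3: add communication wherever a consumer reads a producer on another device."""
    comms = []
    for consumer in tasks:
        for producer in consumer.deps:
            if producer.device != consumer.device:
                comms.append((producer.device, consumer.device, producer.op))
    return comms  # list of (src_device, dst_device, tensor) transfers


if __name__ == "__main__":
    tasks = transform(["matmul0", "matmul1"], shards=2)
    tasks[2].deps.append(tasks[0])        # toy cross-shard dependency
    schedule(tasks, num_devices=2)
    print(preserve_dependencies(tasks))
```

The point of the sketch is the decoupling the abstract describes: the transformation into tasks, their placement in space and time, and the insertion of communication are each handled in a separate phase, so each can be varied independently when composing a plan.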
Keywords
flexible DNN parallelization, unified abstraction