EasyScale: Accuracy-consistent Elastic Training for Deep Learning

arXiv (2022)

Abstract
Distributed synchronized GPU training is commonly used for deep learning. Constraining a job to a fixed set of GPUs hurts large-scale deep learning training jobs and also lowers cluster utilization. However, incorporating resource elasticity often introduces non-determinism in model accuracy, mainly because the training procedure cannot be isolated from the underlying hardware resources. We introduce EasyScale, an elastic framework that scales distributed training on heterogeneous GPUs while producing deterministic deep learning models. EasyScale strictly follows the data-parallel training flow, carefully traces accuracy-relevant factors, and exploits deep learning characteristics for efficient context switching, thereby achieving accuracy-consistent elastic model training. To saturate the computation capability of heterogeneous GPUs, EasyScale dynamically assigns workers based on our intra-job and inter-job scheduling policies, minimizing GPU idle time and maximizing aggregate job throughput. Deployed in an online serving cluster of CompanyA, EasyScale enables elastic deep learning training jobs to utilize free GPUs opportunistically, improving overall cluster utilization by 62.1% without violating SLAs.
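To make the core idea concrete, the sketch below illustrates one way accuracy-consistent elasticity can be realized: the number of logical workers stays fixed for the lifetime of the job, data sharding and random state are derived only from the logical rank, and each physical GPU context-switches through the logical workers currently assigned to it. This is a minimal illustration under our own assumptions, not the EasyScale API; names such as NUM_LOGICAL_WORKERS, logical_shard, and elastic_step are hypothetical.

```python
# Hypothetical sketch of accuracy-consistent elastic data parallelism.
# The gradient produced per step depends only on the fixed number of logical
# workers, never on how many physical GPUs are currently available.
import torch

NUM_LOGICAL_WORKERS = 8  # fixed for the whole job (illustrative value)


def logical_shard(num_samples, logical_rank, epoch):
    """Deterministic shard for one logical worker: a function of the logical
    rank and epoch only, independent of the current physical GPU count."""
    g = torch.Generator().manual_seed(epoch * NUM_LOGICAL_WORKERS + logical_rank)
    perm = torch.randperm(num_samples, generator=g)
    return perm[logical_rank::NUM_LOGICAL_WORKERS]


def elastic_step(model, data, labels, logical_ranks, epoch, loss_fn):
    """One physical device context-switches through the logical workers assigned
    to it, accumulating gradients so the summed update matches a run where every
    logical worker had its own GPU."""
    model.zero_grad()
    for rank in logical_ranks:
        idx = logical_shard(len(data), rank, epoch)
        loss = loss_fn(model(data[idx]), labels[idx]) / NUM_LOGICAL_WORKERS
        loss.backward()  # gradients accumulate across logical workers
    # In a real multi-GPU job, an all-reduce over physical devices would follow.


if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(4, 2)
    data, labels = torch.randn(64, 4), torch.randint(0, 2, (64,))
    # E.g. with 2 physical GPUs, one GPU might host logical workers 0..3 and the
    # other 4..7; here a single process plays one of those roles.
    elastic_step(model, data, labels, range(0, 4), epoch=0,
                 loss_fn=torch.nn.CrossEntropyLoss())
```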
Keywords
deep learning, training, accuracy-consistent