
Optimizing Multi-GPU Parallelization Strategies for Deep Learning Training

IEEE Micro (2019)

Cited by 37
Abstract
Deploying deep learning (DL) models across multiple compute devices to train large and complex models continues to grow in importance because of the demand for faster and more frequent training. Data parallelism (DP) is the most widely used parallelization strategy, but as the number of devices in data-parallel training grows, so does the communication overhead between devices. Additionally, a larger aggregate batch size per step leads to statistical efficiency loss, i.e., a larger number of epochs are required to converge to a desired accuracy. These factors affect overall training time and, beyond a certain number of devices, the speedup from DP scales poorly. This work explores hybrid parallelization, where each data-parallel worker comprises more than one device to accelerate each training step by exploiting model parallelism. We show that at scale, hybrid training will be more effective at minimizing end-to-end training time than exploiting DP alone. We project that, for Inception-V3, GNMT, and BigLSTM, the hybrid strategy provides an end-to-end training speedup of at least 26.5%, 8%, and 22%, respectively, compared to what DP alone can achieve at scale.
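The paper evaluates these hybrid strategies at scale; the sketch below is not the authors' implementation, only a minimal, hypothetical PyTorch illustration of the layout the abstract describes: each data-parallel worker is a process that owns two GPUs and splits its model across them (model parallelism), while DistributedDataParallel all-reduces gradients between workers. The toy network, the GPU assignment (2*rank, 2*rank+1), the batch size, and the hyperparameters are illustrative assumptions.

# Hypothetical sketch (not the paper's code): hybrid data + model parallelism.
# One process = one data-parallel worker owning TWO GPUs; DDP handles the
# inter-worker gradient all-reduce.
import os
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


class TwoDeviceNet(nn.Module):
    """Toy model whose two stages live on different GPUs of the same worker."""
    def __init__(self, dev0, dev1):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.stage0 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to(dev0)
        self.stage1 = nn.Linear(4096, 10).to(dev1)

    def forward(self, x):
        x = self.stage0(x.to(self.dev0))
        return self.stage1(x.to(self.dev1))   # activation crosses dev0 -> dev1


def worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    dev0, dev1 = 2 * rank, 2 * rank + 1       # two GPUs per data-parallel worker
    model = TwoDeviceNet(dev0, dev1)
    # For a multi-device module, DDP is constructed without device_ids; it
    # still all-reduces gradients across the world_size data-parallel workers.
    ddp_model = DDP(model)
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(32, 1024)                           # per-worker micro-batch
        y = torch.randint(0, 10, (32,), device=dev1)
        loss = nn.functional.cross_entropy(ddp_model(x), y)
        opt.zero_grad()
        loss.backward()   # backward spans both GPUs, then gradients are all-reduced
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    n_workers = torch.cuda.device_count() // 2  # e.g. 8 GPUs -> 4 hybrid workers
    mp.spawn(worker, args=(n_workers,), nprocs=n_workers)

With this layout, the per-step all-reduce involves only half as many participants as pure DP on the same GPU count, and the aggregate batch size per step is also halved, which is the trade-off the abstract argues in favor of at scale.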
Keywords
Training data, Parallel processing, Data models, Machine learning, Computational modeling, Performance evaluation, Scalability, Deep learning