Workshop 19 - ScaDL: Scalable Deep Learning over Parallel and Distributed Infrastructures

IPDPS Workshops (2020)

Abstract
It is our great pleasure to welcome you to the second edition of the workshop on Scalable Deep Learning over Parallel and Distributed Infrastructures (ScaDL)! Recently, Deep Learning (DL) has received tremendous attention in the research community because of the impressive results obtained for a large number of machine learning problems. The success of state-of-the-art deep learning systems relies on training deep neural networks over massive amounts of training data, which typically requires large-scale distributed computing infrastructure to run. This demands advances along multiple research directions such as model/data parallelism, model/data compression, distributed optimization algorithms for DL convergence, synchronization strategies, efficient communication, and specialized hardware acceleration. This intersection of distributed/parallel computing and deep learning is thus becoming critical; this workshop aims to bring these two communities together to foster collaboration, discuss relevant topics, and share results. In addition to five peer-reviewed research papers, ScaDL 2020 also features four invited presentations, from Dr. Manish Gupta (Google Research, India), Prof. Geoffrey Fox (Indiana University, USA), Prof. Wen-mei Hwu (UIUC, USA), and Dr. Minsik Cho (IBM, USA). We sincerely thank our invited speakers for their time and efforts.
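As a point of reference for the data-parallelism and synchronization themes mentioned above, the following is a minimal, self-contained sketch (not drawn from any ScaDL paper) of synchronous data-parallel SGD: the dataset is split across a hypothetical set of workers, each worker computes a gradient on its own shard, and the gradients are averaged (a simulated all-reduce) before every replicated model update. The worker count, toy linear-regression data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem, sharded across W hypothetical workers (data parallelism).
W = 4
X = rng.normal(size=(400, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=400)
shards = list(zip(np.array_split(X, W), np.array_split(y, W)))

w = np.zeros(8)   # model parameters, replicated on every worker
lr = 0.1          # illustrative learning rate

for step in range(200):
    # Each worker computes a local gradient on its own data shard.
    grads = []
    for Xi, yi in shards:
        err = Xi @ w - yi
        grads.append(Xi.T @ err / len(yi))
    # Synchronization strategy: average the local gradients (simulated all-reduce),
    # then apply the identical update on every replica.
    g = np.mean(grads, axis=0)
    w -= lr * g

print("parameter error:", np.linalg.norm(w - true_w))
```

In a real distributed setting the averaging step would be an all-reduce over the network, and much of the research surveyed at the workshop concerns reducing that communication cost (compression, relaxed synchronization) without hurting convergence.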