I/O Bottleneck Investigation in Deep Learning Systems

Semantic Scholar (2018)

Abstract
As deep learning systems continue to grow in importance, several researchers have been analyzing approaches to make such systems efficient and scalable on high-performance computing platforms. As computational parallelism increases, however, data I/O becomes the major bottleneck limiting overall system scalability. In this paper, we present a detailed analysis of the performance bottlenecks of Caffe on large supercomputing systems and propose LMDBIO, an optimized I/O plugin. Throughout the paper, we present six sophisticated data I/O techniques that address five major problems in state-of-the-art I/O subsystems. Our experimental results show that Caffe/LMDBIO improves the overall execution time by 77-fold compared with Caffe/LMDB.
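The abstract's claim that data I/O dominates at scale can be made concrete with a small measurement harness. The sketch below is only an illustration of the kind of data path being analyzed, not the paper's LMDBIO plugin: it uses the Python lmdb bindings to time how long a single reader spends fetching records from a Caffe-style LMDB dataset versus processing them. The database path and the per-record compute cost are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's LMDBIO code): measure read time vs. compute
# time while scanning a Caffe-style LMDB dataset once.
import time
import lmdb  # pip install lmdb

DB_PATH = "train_lmdb"  # hypothetical path to an LMDB dataset


def scan_lmdb(db_path, compute_seconds_per_record=0.0):
    """Iterate over all records, tracking time spent reading vs. computing."""
    read_time = 0.0
    compute_time = 0.0
    env = lmdb.open(db_path, readonly=True, lock=False)
    with env.begin() as txn:
        cursor = txn.cursor()
        t0 = time.perf_counter()
        for _key, _value in cursor:        # cursor advance performs the actual read
            t1 = time.perf_counter()
            read_time += t1 - t0
            time.sleep(compute_seconds_per_record)  # stand-in for the forward/backward pass
            t0 = time.perf_counter()
            compute_time += t0 - t1
    env.close()
    return read_time, compute_time


if __name__ == "__main__":
    r, c = scan_lmdb(DB_PATH, compute_seconds_per_record=0.001)
    print(f"read: {r:.2f}s  compute: {c:.2f}s  I/O share: {r / (r + c):.1%}")
```

As the per-record compute cost shrinks (for example, when the work is spread over more processes), the I/O share reported by such a harness grows, which is the scaling effect the paper sets out to address.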