FPDeep: Scalable Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters

IEEE Transactions on Computers (2020)

Cited by 45 | Views 127
Abstract
Deep Convolutional Neural Networks (CNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling CNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that, to make the distributed cluster work with high utilization, the workload distributed to each node must be large; this implies nontrivial growth in the SGD mini-batch size. In this article we propose a framework, called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train CNNs. This approach has numerous benefits. First, the design does not suffer from performance loss due to batch-size growth. Second, work and storage are balanced among nodes through novel workload and weight partitioning schemes. Part of the mechanism is the surprising finding that it is preferable to store excess weights in neighboring devices rather than in local off-chip memory. Third, the entire system is a fine-grained pipeline, which yields high parallelism and utilization and also minimizes the time that features must be cached while waiting for back-propagation; as a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. Fourth, we find that the simplest topology, a 1D array, is preferred for interconnecting the FPGAs, enabling widespread applicability. We evaluate FPDeep with the AlexNet, VGG-16, and VGG-19 benchmarks. Results show that FPDeep scales well to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 250 Gb/s of bidirectional bandwidth per FPGA, which is easily supported by current-generation FPGAs, FPDeep performance scales linearly up to 100 FPGAs. Energy efficiency is evaluated with respect to GOPs/J; FPDeep provides, on average, 6.4× higher energy efficiency than comparable GPU servers.
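
To make the workload-partitioning idea concrete, the following minimal Python sketch illustrates one way to split per-layer convolution workloads across a 1D array of FPGAs so that every device receives an equal share of the total operations. This is an illustration under assumptions, not the authors' algorithm; the function partition_layers, the layer names, and the operation counts are hypothetical placeholders.

    # Illustrative sketch (not the FPDeep implementation): balance per-layer
    # convolution workloads across a 1D pipeline of FPGAs by assigning each
    # device a contiguous, possibly fractional, slice of the layer sequence.

    def partition_layers(layer_ops, num_fpgas):
        """Return, for each FPGA, a list of (layer_name, fraction) pairs.

        layer_ops : list of (layer_name, ops) tuples in pipeline order
        num_fpgas : number of devices in the 1D array
        """
        total_ops = sum(ops for _, ops in layer_ops)
        # Cumulative-work boundary that each device's slice ends at.
        boundaries = [total_ops * (d + 1) / num_fpgas for d in range(num_fpgas)]

        assignments = [[] for _ in range(num_fpgas)]
        start, device = 0.0, 0
        for name, ops in layer_ops:
            end = start + ops
            cursor = start
            while cursor < end - 1e-9:
                # Take work up to the end of this layer or this device's boundary.
                slice_end = min(end, boundaries[device])
                frac = (slice_end - cursor) / ops
                if frac > 1e-9:
                    assignments[device].append((name, frac))
                cursor = slice_end
                # Move to the next device once its boundary is reached.
                if cursor >= boundaries[device] - 1e-9 and device < num_fpgas - 1:
                    device += 1
            start = end
        return assignments

    if __name__ == "__main__":
        # Hypothetical per-layer operation counts (e.g., GOPs per image).
        layers = [("conv1", 0.2), ("conv2", 0.9), ("conv3", 1.3),
                  ("conv4", 1.0), ("conv5", 0.7)]
        for fpga_id, slices in enumerate(partition_layers(layers, 3)):
            pretty = ", ".join(f"{n}:{frac:.0%}" for n, frac in slices)
            print(f"FPGA {fpga_id}: {pretty}")

With three devices, the sketch splits conv3 and conv4 across neighboring FPGAs so that each device carries roughly one third of the total work, which is the flavor of fine-grained, layer-spanning partitioning the abstract describes.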
Keywords
Training, Field programmable gate arrays, Parallel processing, Engines, Pipelines, Convolution, Bandwidth