Accelerating Block-Circulant Matrix-Based Neural Network Layer on a General Purpose Computing Platform: A Design Guideline

Advances in Information and Communication, Vol. 2 (2020)

Abstract
Deep neural networks (DNNs) have become a powerful tool and enabled state-of-the-art accuracy on many challenging tasks. However, large-scale DNNs consume substantial computational time and storage space. To optimize and improve the performance of the network while maintaining accuracy, the block-circulant matrix-based (BCM) algorithm has been introduced. BCM uses the Fast Fourier Transform (FFT) with block-circulant matrices to compute the output of each layer of the network. Unlike conventional pruning techniques, the BCM preserves the network structure. Compared to a conventional matrix implementation, the BCM reduces the computational complexity of a neural network layer from O(n^2) to O(n^2/k), and it has proven highly effective when implemented on customized hardware such as FPGAs. On general-purpose computing platforms, however, its performance suffers from the overhead of FFTs and matrix reshaping; in certain cases, using the BCM does not improve the total computation time of the network at all. In this paper, we propose a parallel implementation of the BCM layer and provide guidelines that generally lead to better implementation practice. The guidelines span popular implementation languages and packages, including Python, numpy, intel-numpy, tensorflow, and nGraph.
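As a concrete illustration of the computation the abstract describes, the sketch below shows how a block-circulant matrix-vector product can be evaluated with FFTs in numpy. The function name bcm_matvec, the (p, q, k) storage layout, and the first-column convention for each block's defining vector are assumptions made for illustration, not the paper's implementation.

    # Minimal sketch (not the authors' code) of an FFT-based
    # block-circulant matrix-vector product.
    import numpy as np

    def bcm_matvec(w, x):
        """Multiply a block-circulant matrix by a vector via FFTs.

        w : (p, q, k) array; w[i, j] is the defining first-column
            vector of the k x k circulant block at block position (i, j).
        x : vector of length q * k.
        Returns the output vector of length p * k.
        """
        p, q, k = w.shape
        xb = x.reshape(q, k)                 # split x into q length-k blocks
        Wf = np.fft.fft(w, axis=-1)          # FFT of every defining vector
        Xf = np.fft.fft(xb, axis=-1)         # FFT of every input block
        # A circulant block times a block of x is a circular convolution,
        # i.e. an elementwise product in the frequency domain; sum the
        # q column blocks contributing to each of the p row blocks.
        Yf = (Wf * Xf[np.newaxis, :, :]).sum(axis=1)
        return np.fft.ifft(Yf, axis=-1).real.reshape(p * k)

Per block, this replaces a k x k dense product of O(k^2) multiplications with an O(k log k) convolution and cuts storage from k^2 to k values. The FFT calls and the reshaping of x into blocks are precisely the overheads that, per the abstract, can erase the savings on general-purpose platforms, which is what motivates the paper's implementation guidelines.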
Keywords
Block-circulant matrix, Deep learning, Acceleration, Parallel computing