Efficiency analysis of artificial vs. Spiking Neural Networks on FPGAs

Journal of Systems Architecture (2022)

Abstract
Artificial Neural Networks (ANNs) incur high costs in processing power, memory bandwidth, and energy consumption, whereas an average human brain operates within a power budget of roughly 20 W. Brain-inspired approaches such as Spiking Neural Networks (SNNs) are therefore expected to improve efficiency to an unprecedented extent. Yet beyond the spike-coding aspects currently addressed by numerous investigations, research must also solve the practical design of future neuromorphic hardware to ensure very low-power processing. This paper investigates these questions through a pragmatic comparison of deep Convolutional Neural Networks (CNNs) and their equivalent SNNs, based on the implementation and measurement of a set of CNN image-classification benchmarks on FPGA devices. Results show that SNNs are clearly less energy efficient than their equivalent CNNs in the general case, indicating that, on top of ongoing progress in spike-modeling theory (e.g. spike encoding, learning), neuromorphic accelerators must also address important issues in the reality of RTL development and silicon implementation, among them sparsity versus static and idle power consumption, the ability to support high levels of parallelism, memory performance, scalability, and spiking convolutions.
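The sparsity-versus-efficiency trade-off at the heart of the abstract can be illustrated with a back-of-the-envelope operation count. The sketch below is not from the paper; it simply contrasts the fixed multiply-accumulate (MAC) count of a dense ANN layer with the spike-driven accumulates of a rate-coded SNN layer, under assumed values for the layer size, number of timesteps, and firing rate:

```python
import random

random.seed(0)

# Hypothetical fully connected layer: 128 inputs -> 64 outputs.
n_in, n_out = 128, 64

# ANN/CNN view: a dense layer performs one multiply-accumulate (MAC)
# per weight, independent of the input values.
ann_macs = n_in * n_out  # 8192 MACs per inference

# SNN view (assumed rate coding): inputs become binary spike trains
# over T timesteps; only a spike triggers an accumulate (no multiply).
T = 8                # assumed number of timesteps per inference
spike_rate = 0.1     # assumed average firing probability per timestep
n_spikes = sum(
    1 for _ in range(T) for _ in range(n_in)
    if random.random() < spike_rate
)
snn_accs = n_spikes * n_out  # one accumulate per spike, per output

print(f"dense MACs: {ann_macs}, spike-driven accumulates: {snn_accs}")
```

With these assumed numbers each input fires on average T × rate = 0.8 times, so the two operation counts end up in the same ballpark, and any arithmetic savings must still outweigh the static and idle power of the longer multi-timestep execution, which is exactly the kind of silicon-level issue the abstract flags.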
Keywords
Neuromorphic accelerator, Convolutional neural networks, NN application, Bioinspired AI, High-level Synthesis, Energy efficiency, FPGA