22.6 ANP-I: A 28nm 1.5pJ/SOP Asynchronous Spiking Neural Network Processor Enabling Sub-0.1 μJ/Sample On-Chip Learning for Edge-AI Applications

2023 IEEE International Solid-State Circuits Conference (ISSCC)

Abstract
With the development of on-chip learning processors for edge-AI applications, the energy efficiency of NN inference and training is increasingly critical. Since on-chip training dominates the energy consumption of edge-AI processors [1], [2], [4], [5], reducing it is of paramount importance. Spiking neural networks (SNNs) offer energy-efficient inference and learning compared with convolutional neural networks (CNNs) or deep neural networks (DNNs), but SNN-based processors face three challenges (Fig. 22.6.1). 1) During on-chip training, some factors in the ΔW computation are zero, giving ΔW=0 and thus redundant ΔW computation and memory accesses for weight updates. 2) Once a certain accuracy is reached, additional data no longer improves accuracy significantly, and 95% of the energy is wasted on unnecessary processing of the input spike events that follow. 3) With sparse input-spike events, the number of spike events per time step varies; if spike processing is synchronized by time step, the worst case must be provisioned for, wasting energy and time.
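The first challenge lends itself to a simple software analogy. Below is a minimal sketch, not the ANP-I circuit and with all names (sparse_delta_w_update, pre_spikes, post_error, eta) hypothetical, of how zero factors in a Hebbian-style ΔW = η·pre·err update can be detected up front so the all-zero rows and columns of ΔW are never computed or written back:

```python
import numpy as np

def sparse_delta_w_update(weights, pre_spikes, post_error, eta=0.01):
    """Hebbian-style update dW = eta * outer(pre, err), computed only on
    the sub-block where both factors are nonzero."""
    active_pre = np.flatnonzero(pre_spikes)    # presynaptic neurons that spiked
    active_post = np.flatnonzero(post_error)   # postsynaptic neurons with nonzero error
    if active_pre.size == 0 or active_post.size == 0:
        return 0                               # dW is all-zero: skip compute and memory access
    dw = eta * np.outer(pre_spikes[active_pre], post_error[active_post])
    weights[np.ix_(active_pre, active_post)] += dw
    return dw.size                             # synapses actually touched

rng = np.random.default_rng(0)
W = np.zeros((256, 64))
pre = (rng.random(256) < 0.1).astype(float)                    # ~90% sparse spikes
err = np.where(rng.random(64) < 0.1, rng.standard_normal(64), 0.0)
print(f"updated {sparse_delta_w_update(W, pre, err)} of {W.size} synapses")
```

With ~90% sparsity in both factors, as in this example, only about 1% of the synapse array is computed and written per update.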
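For the second challenge, one plausible software analogy is gating the training loop on a running accuracy estimate, so samples arriving after accuracy saturates are never processed. A minimal sketch; train_step, eval_acc, and the thresholds are hypothetical names, not the chip's mechanism:

```python
def train_with_sample_gating(samples, train_step, eval_acc,
                             target_acc=0.95, check_every=100):
    """Consume training samples until a running accuracy estimate
    crosses target_acc, then skip (never process) the remainder."""
    processed = 0
    for i, sample in enumerate(samples):
        if i % check_every == 0 and eval_acc() >= target_acc:
            break                      # accuracy saturated: stop spending energy
        train_step(sample)
        processed += 1
    return processed

# Dummy usage: accuracy climbs 0.6 -> 0.8 -> 0.96 across checkpoints.
acc_trace = iter([0.6, 0.8, 0.96])
last = {"a": 0.0}
def eval_acc():
    last["a"] = next(acc_trace, last["a"])
    return last["a"]

used = train_with_sample_gating(range(1000), lambda s: None, eval_acc)
print(f"processed {used} of 1000 samples")   # -> processed 200 of 1000 samples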
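```

The third challenge motivates asynchronous, event-driven processing: work scales with the actual number of spike events rather than a worst-case per-time-step budget. A minimal queue-based sketch; the chip realizes this with asynchronous circuits, not a software loop, and all names here are hypothetical:

```python
import numpy as np
from collections import deque

def process_events(events, weights, threshold=1.0):
    """Drain (time_step, pre_neuron) spike events; each event accumulates
    its synaptic row into postsynaptic membranes. Cost is per event, so a
    quiet time step costs nothing."""
    membrane = np.zeros(weights.shape[1])
    out = []
    while events:
        t, pre = events.popleft()
        membrane += weights[pre]                 # touched only on an event
        fired = np.flatnonzero(membrane >= threshold)
        out.extend((t, int(p)) for p in fired)
        membrane[fired] = 0.0                    # reset neurons that fired
    return out

events = deque([(0, 3), (0, 7), (2, 3)])   # 3 events total, not N*T work
W = np.full((16, 4), 0.4)
print(process_events(events, W))
```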
Keywords
ANP-I, deep neural networks, edge-AI applications, edge-AI processors, energy consumption, energy efficiency, energy-efficient inference, on-chip learning processors, on-chip training energy, size 28.0 nm, SNN-based processors, sparse input-spike events, spike processing