ANP-I: A 28-nm 1.5-pJ/SOP Asynchronous Spiking Neural Network Processor Enabling Sub-0.1-μJ/Sample On-Chip Learning for Edge-AI Applications

IEEE Journal of Solid-State Circuits (2024)

Abstract
Reducing learning energy consumption is critical for edge-artificial intelligence (AI) processors with on-chip learning, since on-chip learning energy dominates total energy consumption, especially for applications that require long-term learning. To achieve this goal, we optimize a neuromorphic learning algorithm and propose random target window (TW) selection, hierarchical update skip (HUS), and asynchronous time step acceleration (ATSA) to reduce on-chip learning power consumption. Our approach results in a 28-nm 1.25-mm² asynchronous neuromorphic processor (ANP-I) whose on-chip learning energy per sample is less than 15% of its inference energy per sample. With all weights randomly initialized, this processor enables on-chip learning for edge-AI tasks such as gesture recognition, keyword spotting, and image classification, consuming sub-0.1-μJ learning energy per sample at 0.56 V and 40-MHz frequency while maintaining >92% accuracy for all tasks.
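The headline figures can be related by back-of-envelope arithmetic: a minimal sketch (illustrative only, not from the paper's methods) showing how the 1.5-pJ/SOP efficiency and the sub-0.1-μJ/sample learning budget together bound the number of synaptic operations (SOPs) a single learning pass may spend.

```python
# Illustrative arithmetic only: bound the SOPs per learning pass implied by
# the abstract's headline numbers (1.5 pJ/SOP, sub-0.1 uJ/sample).
energy_per_sop_pj = 1.5        # pJ per synaptic operation (headline figure)
budget_per_sample_uj = 0.1     # uJ per sample (learning-energy target)

# Convert the budget to pJ (1 uJ = 1e6 pJ) and divide by per-SOP cost.
max_sops = budget_per_sample_uj * 1e6 / energy_per_sop_pj

print(round(max_sops))  # ~66667 synaptic operations per sample
```

This is a ceiling, not a measurement: the techniques named above (TW selection, HUS, ATSA) work by reducing how many such operations and weight updates a learning pass actually performs.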
Keywords
Application-specific integrated circuit (ASIC), asynchronous circuits, neuromorphic computing, on-chip learning, spiking neural network (SNN)