SFANC: Scalable and Flexible Architecture for Neuromorphic Computing

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2023)

Abstract
Spiking neural networks (SNNs), recognized as the third generation of neural networks, offer remarkable computational capabilities but demand massive computational resources and flexibility to simulate biological neural functions. In this work, we present SFANC, a scalable and flexible neuromorphic architecture for SNNs built on an optimized network-on-chip (NoC) router architecture and highly programmable neuromorphic cores (NCs). SFANC includes 16 NCs, supporting 8000 neurons and four million synapses. The NCs are based on RISC-V with SNN-specific instruction extensions, providing remarkable flexibility and computational speed. The NoC's spiking routers support multicast routing, flow estimation, and buffer sharing, which improves scalability. Additionally, we propose a spectral cluster mapping approach for efficiently deploying SNNs onto SFANC, ensuring high flexibility and parallelism. We analyze the processing speedup for typical SNN topologies using the leaky integrate-and-fire (LIF) neuron model within the NCs, achieving up to an 8.6x average speedup over the RISC-V core across different SNN applications. Our enhanced spiking router reduces spike latency by up to 47.4% under various spike coding schemes. Moreover, when applied to typical SNN topologies, our mapping method reduces average spike latency by up to 32.5% compared to the sequential mapping used by SpiNNaker. In summary, this work demonstrates high performance, flexibility, and scalability for simulating and accelerating SNNs, showcasing its potential as a promising solution for neuromorphic architectures.
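The abstract refers to the leaky integrate-and-fire (LIF) neuron model executed on the NCs. As a rough illustration of what that per-timestep update entails, the sketch below implements a textbook discrete-time LIF step in plain C; the parameter names and values (LEAK, V_TH, V_RESET) and the input trace are assumptions for illustration and are not taken from the paper or its hardware.

```c
#include <stdio.h>

/* Textbook discrete-time LIF neuron update (illustrative sketch only).
 * LEAK, V_TH, V_RESET, and the input trace are assumed values, not
 * parameters of the SFANC neuromorphic cores. */

#define N_STEPS  10
#define LEAK     0.9f   /* per-step membrane leak factor (assumed) */
#define V_TH     1.0f   /* firing threshold (assumed)              */
#define V_RESET  0.0f   /* post-spike reset potential (assumed)    */

int main(void) {
    /* Pre-summed synaptic input per time step (assumed example trace). */
    float input[N_STEPS] = {0.3f, 0.4f, 0.5f, 0.0f, 0.2f,
                            0.6f, 0.1f, 0.0f, 0.7f, 0.3f};
    float v = 0.0f;     /* membrane potential */

    for (int t = 0; t < N_STEPS; ++t) {
        v = LEAK * v + input[t];      /* leak, then integrate input   */
        int spike = (v >= V_TH);      /* compare against threshold    */
        printf("t=%d  v=%.3f  spike=%d\n", t, v, spike);
        if (spike) {
            v = V_RESET;              /* reset membrane after firing  */
        }
    }
    return 0;
}
```

On SFANC, this per-neuron loop would presumably be carried out by the SNN-specific instruction extensions on the RISC-V-based NCs, but the exact instruction-level realization is not described in the abstract.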
Keywords
Instruction extensions, network-on-chip (NoC), neuromorphic architecture, spiking neural networks (SNNs)