A Spiking LSTM Accelerator for Automatic Speech Recognition Application Based on FPGA

Tingting Yin, Feihong Dong, Chao Chen, Chenghao Ouyang, Zheng Wang, Yongkui Yang

Electronics (2024)

Abstract
Long Short-Term Memory (LSTM) networks find extensive application in sequential learning tasks, notably speech recognition. However, existing accelerators tailored for traditional LSTM networks suffer from high power consumption, primarily due to the intensive matrix-vector multiplication operations inherent to LSTM networks. In contrast, the spiking LSTM network avoids these multiplication operations by replacing multiplications and nonlinear functions with additions and comparisons. In this paper, we present an FPGA-based accelerator specifically designed for spiking LSTM networks. First, we employ a low-cost circuit in the LSTM gates to significantly reduce power consumption and hardware cost. Second, we propose a serial-parallel processing architecture, along with its hardware implementation, to reduce inference latency. Third, we quantize and efficiently deploy the synapses of the spiking LSTM network. The power consumption of the accelerator implemented on Artix-7 and Zynq-7000 is only about 1.1 W and 0.84 W, respectively, when performing inference for speech recognition on the Free Spoken Digit Dataset (FSDD). The energy consumed per inference is also remarkably low, at 87 μJ and 66 μJ, respectively. Compared with dedicated accelerators designed for traditional LSTM networks, our spiking LSTM accelerator reduces power consumption by orders of magnitude.
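The abstract's central idea, replacing each gate's multiply-and-activate step with accumulate-and-compare, can be sketched in a few lines of NumPy. The sketch below is illustrative only and is not the authors' circuit: the function name spiking_gate, the leak and threshold parameters, and the reset-to-zero behavior after a spike are all assumptions made for this example.

```python
import numpy as np

def spiking_gate(weights, spikes_in, v_mem, threshold=1.0, leak=0.05):
    """One timestep of a multiplication-free spiking gate (illustrative sketch).

    Because the inputs are binary spikes, the usual weight * input
    products reduce to summing the weights of the active inputs
    (addition only), and the sigmoid/tanh activation is replaced
    by a threshold comparison on the membrane potential.
    """
    active = spikes_in.astype(bool)
    # Multiplication-free synaptic integration: add only the weight
    # columns whose input neurons spiked this timestep.
    v_mem = v_mem - leak + weights[:, active].sum(axis=1)
    # Comparison replaces the nonlinear activation: fire if above threshold.
    spikes_out = (v_mem >= threshold).astype(np.uint8)
    # Assumed reset behavior: clear the potential of neurons that fired.
    v_mem = np.where(spikes_out == 1, 0.0, v_mem)
    return spikes_out, v_mem

# Example: 4 gate neurons driven by 8 binary input spikes.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
out, v = spiking_gate(w, rng.integers(0, 2, size=8), np.zeros(4))
```

In hardware, the gated accumulation above maps naturally onto adders enabled by input spikes rather than multiply-accumulate units, which is consistent with the low-cost gate circuit the abstract describes.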
Keywords
spiking LSTM,spiking neural networks,hardware acceleration,FPGA,automatic speech recognition (ASR)