Low Precision Local Learning for Hardware-Friendly Neuromorphic Visual Recognition

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
Quantization is an important approach to hardware-friendly implementation. However, while quantization techniques have been extensively explored in deep learning to reduce the memory and computational footprint of models, similar investigations are scarce in neuromorphic computing, which is expected to offer higher power and memory efficiency than its more traditional counterpart. In this work, we explore quantization-aware training (QAT) for SNNs, as well as fully quantized transfer learning, using the DECOLLE learning algorithm as the base system; its local-loss-based learning is biologically plausible, avoids complex backpropagation through time, and is potentially hardware-friendly. We also evaluate different rounding functions and analyze their effects on learning. We validate our results on two datasets, DVS-Gestures and N-MNIST, where we come within 0.3% of full-precision accuracy on both datasets using only 3-bit weights with a convolutional neural network. We are currently exploring other datasets to understand the generalizability of the explored quantization schemes.
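To make the QAT and rounding-function discussion concrete, the following is a minimal sketch in PyTorch (the language of DECOLLE's public reference implementation) of a per-tensor uniform weight quantizer with a straight-through estimator, supporting both round-to-nearest and stochastic rounding. The function name, the max-magnitude scaling, and the STE are illustrative assumptions, not the paper's exact scheme.

```python
# A minimal sketch of quantization-aware training (QAT) for weights with a
# straight-through estimator (STE). This is NOT the paper's implementation:
# the function name, the per-tensor max-magnitude scale, and the STE are
# illustrative assumptions chosen to show the two rounding modes.
import torch


def quantize_weights(w: torch.Tensor, bits: int = 3,
                     stochastic: bool = False) -> torch.Tensor:
    """Uniformly quantize a weight tensor to `bits` bits with an STE."""
    qmax = 2 ** (bits - 1) - 1                       # 3 for signed 3-bit weights
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    x = w / scale
    if stochastic:
        # Stochastic rounding: round up with probability equal to the
        # fractional part, so the rounding is unbiased in expectation.
        floor = torch.floor(x)
        x_q = floor + (torch.rand_like(x) < (x - floor)).float()
    else:
        x_q = torch.round(x)                         # round-to-nearest
    x_q = torch.clamp(x_q, -qmax - 1, qmax) * scale  # e.g. levels in [-4, 3]
    # STE: the forward pass uses the quantized value; the backward pass
    # treats quantization as the identity, so full-precision shadow
    # weights keep receiving gradient updates.
    return w + (x_q - w).detach()


# Typical QAT usage: the optimizer updates full-precision shadow weights,
# while every forward pass sees their quantized version.
w = torch.randn(64, 32, requires_grad=True)
w_q = quantize_weights(w, bits=3, stochastic=True)
loss = (w_q ** 2).sum()
loss.backward()                                      # gradients reach `w` via the STE
```

In a DECOLLE-style local-learning setting, a quantizer like this would be applied to each layer's weights inside its local forward step before the layer's local loss is computed.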
Keywords
Spiking Neural Network (SNN), quantization, quantization-aware training, local learning, transfer learning