CQ+ Training: Minimizing Accuracy Loss in Conversion From Convolutional Neural Networks to Spiking Neural Networks

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (2023)

Abstract
Spiking neural networks (SNNs) are attractive for energy-constrained use cases due to their binarized activation, which eliminates the need for weight multiplication. However, their lag in accuracy compared to traditional convolutional neural networks (CNNs) has limited their deployment. In this paper, we propose CQ+ training (extended "clamped" and "quantized" training), an SNN-compatible CNN training algorithm that achieves state-of-the-art accuracy on both the CIFAR-10 and CIFAR-100 datasets. Using a 7-layer modified VGG model (VGG-*), we achieved 95.06% accuracy on the CIFAR-10 dataset for the equivalent SNN. The accuracy drop from converting the CNN solution to an SNN is only 0.09% when using a time step of 600. To reduce latency, we propose a parameterized input encoding method and a threshold training method, which further reduce the time window size to 64 while still achieving an accuracy of 94.09%. For the CIFAR-100 dataset, we achieved an accuracy of 77.27% using the same VGG-* structure and a time window of 500. We also demonstrate the transformation of popular CNNs, including ResNet (basic, bottleneck, and shortcut blocks), MobileNet v1/v2, and DenseNet, into SNNs with near-zero conversion accuracy loss and a time window size smaller than 60. The framework was developed in PyTorch and is publicly available.
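The abstract does not spell out the training rule itself. As a rough, hedged illustration only, the PyTorch sketch below assumes that "clamped and quantized" training means clipping CNN activations to [0, 1] and rounding them to T discrete levels (mirroring the spike counts an SNN can express in a window of T time steps), with gradients passed through via a straight-through estimator. All names here (ClampQuantize, CQActivation, the parameter T) are hypothetical and not taken from the paper.

import torch
import torch.nn as nn

class ClampQuantize(torch.autograd.Function):
    """Clamp activations to [0, 1] and quantize them to T levels.

    Illustrative sketch: the forward pass snaps values onto the grid
    {0, 1/T, ..., 1}, the rates an SNN can represent in T time steps.
    The backward pass is a straight-through estimator that passes
    gradients only where the clamp was not saturated.
    """

    @staticmethod
    def forward(ctx, x, T):
        ctx.save_for_backward(x)
        x_clamped = x.clamp(0.0, 1.0)
        return torch.round(x_clamped * T) / T

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        mask = (x >= 0.0) & (x <= 1.0)
        return grad_output * mask.to(grad_output.dtype), None

class CQActivation(nn.Module):
    """Hypothetical drop-in replacement for ReLU during CNN training."""

    def __init__(self, T: int = 64):
        super().__init__()
        self.T = T

    def forward(self, x):
        return ClampQuantize.apply(x, self.T)

On this reading, training the CNN with such an activation in place of ReLU keeps every activation expressible as a spike count over T steps, which is consistent with the abstract's observation that a large time step (600) yields a near-zero conversion gap, while the proposed encoding and threshold training allow the window to shrink to 64 with modest accuracy loss.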
Keywords
Training, Neurons, Quantization (signal), Convolutional neural networks, Backpropagation, Clamps, Encoding, CNN-to-SNN Conversion, deep spiking neural networks