DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
Current Internet of Things (IoT) embedded applications use machine learning algorithms to process the collected data. However, the computational complexity and storage requirements of existing deep learning methods hinder the wide deployment of such embedded applications. Spiking Neural Networks (SNNs) are a brain-inspired learning methodology that emerged from theoretical neuroscience as an alternative computing paradigm for low-power computation. Since IoT devices are usually resource-constrained, compression techniques are crucial for the practical application of SNNs. Most existing methods directly apply pruning techniques from artificial neural networks (ANNs) to SNNs while ignoring the distinction between the two, which limits the potential of pruning on SNNs. In this paper, inspired by the topology of neuronal co-activity in the nervous system, we propose a dynamic pruning framework (dubbed DynSNN) for SNNs that optimizes the network topology on the fly with almost no accuracy loss. Experimental results on a wide range of classification tasks show that the proposed method achieves almost lossless compression for SNNs on the MNIST, CIFAR-10, and ImageNet datasets. Specifically, it incurs only about 0.3% accuracy loss at a 34% compression rate on CIFAR-10 and ImageNet, and reaches a 60% compression rate with no accuracy loss on MNIST, revealing a remarkable structure-refining capability in SNNs.
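The abstract does not give implementation details, so the sketch below is only a rough illustration of a co-activity-driven pruning criterion of the kind it describes, not the paper's actual algorithm. The function names (coactivity_scores, dynamic_prune), the scoring rule (co-firing counts plus firing rate), and the keep-ratio parameter are all assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical sketch: prune neurons of one SNN layer by how strongly they
# co-fire with the rest of the layer. `spikes` is a binary spike record of
# shape (T, N): T time steps, N neurons. The criterion is illustrative only.

def coactivity_scores(spikes: np.ndarray) -> np.ndarray:
    """Score each neuron by its co-firing with the layer plus its firing rate."""
    T, _ = spikes.shape
    rates = spikes.mean(axis=0)          # per-neuron firing rate
    coact = (spikes.T @ spikes) / T      # (N, N) pairwise co-firing matrix
    np.fill_diagonal(coact, 0.0)         # ignore self-co-activity
    return coact.sum(axis=1) + rates     # low score = weakly co-active neuron

def dynamic_prune(spikes: np.ndarray, keep_ratio: float = 0.66) -> np.ndarray:
    """Return a boolean mask keeping the top `keep_ratio` neurons by score."""
    scores = coactivity_scores(spikes)
    k = max(1, int(round(keep_ratio * scores.size)))
    mask = np.zeros(scores.size, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

# Example: 100 time steps of 32 neurons, keep ~66% (i.e. ~34% compression).
rng = np.random.default_rng(0)
spikes = (rng.random((100, 32)) < 0.1).astype(np.float32)
mask = dynamic_prune(spikes, keep_ratio=0.66)
print(f"kept {mask.sum()} of {mask.size} neurons")
```

In a dynamic scheme such a mask would be recomputed periodically during training, so the topology can be refined on the fly rather than fixed after a one-shot pruning pass.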
Keywords
Spiking Neural Network, Dynamic Network, Accuracy, Edge Devices