HOYER REGULARIZER IS ALL YOU NEED FOR EXTREMELY SPARSE SPIKING NEURAL NETWORKS
ICLR 2023 (2023)
Abstract
Spiking Neural Networks (SNNs) have emerged as an attractive spatio-temporal
computing paradigm for a wide range of low-power vision tasks. However, state-
of-the-art (SOTA) SNN models either incur multiple time steps, which hinders their
deployment in real-time use cases, or increase the training complexity significantly.
To mitigate this concern, we present a framework for training one-time-step SNNs
from scratch that uses a novel variant of the recently proposed Hoyer regularizer. We estimate the threshold of each SNN layer as the Hoyer extremum of a
clipped version of its activation map, where the clipping threshold is trained using
gradient descent with our Hoyer regularizer. This approach not only downscales
the value of the trainable threshold, thereby emitting a large number of spikes for
weight updates within a limited number of iterations (due to the single time step),
but also shifts the pre-activation values away from the threshold, thereby mitigating
the effect of noise that can degrade the SNN accuracy. Our approach outperforms
existing spiking, binary, and adder neural networks in terms of the accuracy-FLOPs
trade-off on complex image recognition tasks. Downstream experiments on object detection also demonstrate the efficacy of our approach.
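To make the two quantities in the abstract concrete, here is a minimal NumPy sketch. It assumes the Hoyer regularizer takes the standard form from the sparsity literature, the squared L1-to-L2 norm ratio, and the Hoyer extremum the ratio ||z||₂²/||z||₁; the paper trains a novel variant, so its exact definitions may differ, and the clipping/training loop is omitted.

```python
import numpy as np

def hoyer_regularizer(w):
    """Squared L1/L2 norm ratio (standard Hoyer sparsity measure).

    Lies in [1, n] for an n-element tensor: 1 for a one-hot
    (maximally sparse) tensor, n for a uniform (dense) one.
    Adding it to the loss therefore pushes weights toward sparsity.
    """
    l1 = np.abs(w).sum()
    l2 = np.sqrt((w ** 2).sum())
    return (l1 / l2) ** 2

def hoyer_extremum(z):
    """Hoyer extremum ||z||_2^2 / ||z||_1 of an activation map.

    Illustrative per-layer threshold estimate; in the paper it is
    computed on a clipped activation map whose clipping threshold
    is itself learned by gradient descent.
    """
    return (z ** 2).sum() / np.abs(z).sum()

# A one-hot tensor is maximally sparse; a uniform one is maximally dense.
w_sparse = np.array([0.0, 0.0, 3.0, 0.0])
w_dense = np.ones(4)
print(hoyer_regularizer(w_sparse))  # 1.0
print(hoyer_regularizer(w_dense))   # 4.0
print(hoyer_extremum(w_dense))      # 1.0
```

The extremum is a weighted mean of the activation magnitudes (each |z_i| weighted by its own magnitude), so large activations dominate it, which makes it a natural data-driven firing threshold.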