Minibatch Processing For Speed-Up And Scalability Of Spiking Neural Network Simulation

2020 International Joint Conference on Neural Networks (IJCNN), 2020

Abstract
Spiking neural networks (SNNs) are a promising candidate for biologically inspired and energy-efficient computation. However, their simulation is restrictively time-consuming and creates a bottleneck in developing competitive training methods with potential deployment on neuromorphic hardware platforms, even on simple tasks. To address this issue, we provide an implementation of mini-batch processing applied to clock-based SNN simulation, leading to drastically increased data throughput. To our knowledge, this is the first general-purpose implementation of mini-batch processing in a spiking neural network simulator, and it works with arbitrary neuron and synapse models. We demonstrate nearly constant-time scaling with batch size on a simulation setup (up to GPU memory limits), and showcase the effectiveness of large batch sizes in two SNN application domains, resulting in ~880× and ~24× reductions in wall-clock time, respectively. Different parameter reduction techniques are shown to produce different learning outcomes in a simulation of networks trained with spike-timing-dependent plasticity. Machine learning practitioners and biological modelers alike may benefit from the drastically reduced simulation time and increased iteration speed this method enables.
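The core idea can be illustrated with a short sketch. The code below is not the authors' simulator; it is a minimal example, assuming PyTorch and a plain leaky integrate-and-fire (LIF) neuron model with hypothetical parameter values, of how adding a leading batch dimension lets one clock-based update advance every example in the mini-batch at once. Because each time step is a single batched tensor operation on the GPU, wall-clock time per step stays nearly constant as batch size grows, up to memory limits.

```python
# Minimal sketch (illustrative only): batched clock-based simulation of a
# layer of leaky integrate-and-fire neurons. The batch dimension means one
# kernel launch per time step updates all examples simultaneously.
import torch

def simulate_lif_batch(inputs, v_rest=-65.0, v_thresh=-52.0, v_reset=-65.0,
                       tau=100.0, dt=1.0):
    """Simulate LIF neurons on a batch of input current traces.

    inputs: tensor of shape (time_steps, batch_size, n_neurons)
    returns: spike tensor of the same shape with 0/1 entries.
    """
    time_steps, batch_size, n_neurons = inputs.shape
    v = torch.full((batch_size, n_neurons), v_rest, device=inputs.device)
    spikes = torch.zeros_like(inputs)

    decay = dt / tau
    for t in range(time_steps):                      # clock-based update loop
        v = v + decay * (v_rest - v) + inputs[t]     # leaky integration
        spiked = v >= v_thresh                       # threshold crossing
        spikes[t] = spiked.float()
        v = torch.where(spiked, torch.full_like(v, v_reset), v)  # reset voltage

    return spikes

# Usage: 128 examples are simulated together instead of one at a time.
x = torch.rand(250, 128, 100)   # (time, batch, neurons) random input currents
out = simulate_lif_batch(x)
print(out.shape, out.sum().item())
```

The same pattern extends to other neuron and synapse models: as long as state variables carry a batch dimension and updates are expressed as elementwise or matrix operations, the per-step cost is dominated by kernel launch overhead rather than batch size.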
Keywords
SNN application domains, wall-clock time, spike-timing-dependent plasticity, biological modelers, minibatch processing, spiking neural network simulation, energy efficient computation, competitive training methods, neuromorphic hardware platforms, mini-batch processing, clock-based SNN simulation, drastically increased data throughput, arbitrary neuron, synapse models, machine learning