
A Non-deterministic Training Approach for Memory-Efficient Stochastic Neural Networks

2023 IEEE 36th International System-on-Chip Conference (SOCC)

Abstract
We propose a non-deterministic training approach for memory-efficient stochastic computing neural networks (SCNNs). Conventional SCNNs simply convert the trained network parameters into stochastic bit-streams at the inference phase. Although stochastic bit-streams simplify binary multiplication and addition by exploiting the principle of probability, the extremely long bit-streams incur substantial memory cost and computation delay. Unlike methods that rely on long bit-streams to convert a full-precision NN into an SCNN at inference time, the proposed approach introduces non-deterministic computation into the training phase as well, alleviating the memory growth caused by long bit-streams. To this end, we probabilize the NN parameters in the feed-forward pass of training and convert them into 1/4/8-bit stochastic number representations according to their probabilities, which greatly reduces the memory requirements of SC. To alleviate the training instability caused by low-bit encoding, we propose a multiple parallel training strategy (MPTS) that stabilizes the results through a voting mechanism. We evaluate the proposed training approach on a fully connected NN with the MNIST dataset. Compared with the baseline training method, which achieves 97.77% accuracy using 32-bit floating-point values, the proposed non-deterministic training approach achieves a reasonable accuracy of 90.34% while using 4-bit stochastic number representations for the layer weights and biases.
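To make the bit-stream arithmetic concrete, here is a minimal, self-contained sketch (not from the paper) of unipolar stochastic computing: a value p in [0, 1] is encoded as a random bit-stream whose mean is p, and multiplying two independent streams reduces to a bitwise AND. The function names `encode`/`decode` and the stream lengths are illustrative assumptions; the sketch also shows why accuracy demands long streams, which is the memory cost the paper targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(p: float, length: int) -> np.ndarray:
    """Encode p in [0, 1] as a unipolar stochastic bit-stream of the given length."""
    return (rng.random(length) < p).astype(np.uint8)

def decode(stream: np.ndarray) -> float:
    """Recover the encoded value as the fraction of 1s in the stream."""
    return float(stream.mean())

# Multiplication of independent unipolar streams is a bitwise AND,
# since P(bit_a & bit_b = 1) = p_a * p_b.
a, b = 0.6, 0.5
for length in (16, 256, 4096):
    product = encode(a, length) & encode(b, length)
    print(length, decode(product))  # approaches a * b = 0.30 as streams lengthen
```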
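One plausible reading of probabilizing parameters into 1/4/8-bit stochastic number representations is unbiased stochastic rounding onto a low-bit grid during the feed-forward pass. The sketch below illustrates that idea under this assumption; the grid range [-1, 1] and the function name `stochastic_quantize` are hypothetical, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Unbiased stochastic rounding of weights onto a (2**bits)-level grid.

    Hypothetical sketch: weights are clipped to [-1, 1], and each one rounds
    up or down with probability proportional to its distance from the two
    neighboring grid points, so the expected quantized value equals the weight.
    """
    levels = 2 ** bits - 1
    scaled = (np.clip(w, -1.0, 1.0) + 1.0) / 2.0 * levels  # map to [0, levels]
    low = np.floor(scaled)
    up = rng.random(w.shape) < (scaled - low)              # round up w.p. fractional part
    return (low + up) / levels * 2.0 - 1.0                 # map back to [-1, 1]

w = np.array([0.3141, -0.271, 0.9, -0.05])
print(stochastic_quantize(w, 4))  # values snap to the 4-bit grid; unbiased on average
```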
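The abstract does not detail the MPTS voting mechanism; a simple majority vote over K parallel stochastic runs, sketched below, is one way such a mechanism could stabilize low-bit training. The (K, N) shape convention and the function name `mpts_vote` are assumptions for illustration.

```python
import numpy as np

def mpts_vote(predictions: np.ndarray) -> np.ndarray:
    """Majority vote over K parallel runs: predictions has shape (K, N)."""
    K, N = predictions.shape
    voted = np.empty(N, dtype=predictions.dtype)
    for i in range(N):
        labels, counts = np.unique(predictions[:, i], return_counts=True)
        voted[i] = labels[np.argmax(counts)]  # most frequent label for sample i
    return voted

# Three parallel stochastic copies classifying three samples:
preds = np.array([[3, 1, 7],
                  [3, 2, 7],
                  [5, 2, 7]])
print(mpts_vote(preds))  # -> [3 2 7]
```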
Keywords
Non-determinism, Neural Network, Stochastic Encoding, Image Classification, Machine Learning, Voting Mechanism