Sparse ReRAM engine: joint exploration of activation and weight sparsity in compressed neural networks

Tzu-Hsien Yang
I-Ching Tseng
Han-Wen Hu

Proceedings of the 46th International Symposium on Computer Architecture, pp. 236-249, 2019.

DOI: https://doi.org/10.1145/3307650.3322271

Abstract:

Exploiting model sparsity to reduce ineffectual computation is a common approach to improving the energy efficiency of DNN inference accelerators. However, due to the tightly coupled crossbar structure, exploiting sparsity in ReRAM-based NN accelerators is a less explored area. Existing architectural studies on ReRAM-based NN accelerato…
