Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

Journal of Machine Learning Research, 2017. arXiv:1609.07061.


Abstract:

We introduce a method to train Quantized Neural Networks (QNNs) -- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced.
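
The training scheme summarized in the abstract (1-bit weights and activations in the forward pass, while gradients still update real-valued weights) is typically realized with a straight-through estimator. The sketch below illustrates that idea in PyTorch under those assumptions; the names BinarizeSTE and BinaryLinear are illustrative and are not taken from the authors' reference implementation.

import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """Sign quantization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)          # 1-bit values in {-1, +1}

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient straight through, but cancel it where |x| > 1
        # (the hard-tanh clipping commonly used in low-precision training).
        return grad_output * (x.abs() <= 1).float()


class BinaryLinear(nn.Module):
    """Linear layer whose weights and inputs are binarized in the forward pass.

    Real-valued weights are kept and updated by the optimizer; only the
    forward computation uses their 1-bit versions.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)   # quantized weights
        xb = BinarizeSTE.apply(x)             # quantized activations
        return xb @ wb.t()


if __name__ == "__main__":
    layer = BinaryLinear(8, 4)
    out = layer(torch.randn(2, 8))
    out.sum().backward()                      # gradients reach the real-valued weights
    print(out.shape, layer.weight.grad.shape)

In this sketch the real-valued weights serve only as accumulators for gradient updates; at inference time they can be replaced by their 1-bit counterparts, which is what enables the memory and bit-wise-arithmetic savings described above.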
