Neural Networks with Few Multiplications

International Conference on Learning Representations (ICLR), 2016.


Abstract:

For most deep learning algorithms, training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: first, we stochastically binarize weights to convert the multiplications involved in computing hidden states into sign changes; second, while back-propagating error derivatives, we quantize the representations at each layer to convert the remaining multiplications into binary shifts.
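
To make the first part concrete, the sketch below is a minimal NumPy illustration of stochastic weight binarization, not the authors' code: each weight, assumed clipped to [-1, 1], is rounded to +1 with probability given by a hard sigmoid, (w + 1) / 2, and to -1 otherwise, so the binarized weight equals the real-valued weight in expectation and multiplying by it reduces to a sign change. The function name `stochastic_binarize` and the example shapes are illustrative assumptions.

```python
import numpy as np

def stochastic_binarize(W, rng):
    """Stochastically round a real-valued weight matrix to {-1, +1}.

    Assumption for this sketch: weights are clipped to [-1, 1], and
    P(w_b = +1) = (w + 1) / 2 (a hard sigmoid), so E[w_b] = w.
    """
    W = np.clip(W, -1.0, 1.0)
    p = (W + 1.0) / 2.0                      # probability of rounding up to +1
    return np.where(rng.random(W.shape) < p, 1.0, -1.0)

# Usage: with binarized weights, a layer's pre-activation needs only
# sign changes and additions rather than floating point multiplications.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 3))          # hypothetical 4-in, 3-out layer
x = rng.standard_normal(4)                   # one input vector
Wb = stochastic_binarize(W, rng)             # entries are all +1 or -1
h = x @ Wb                                   # multiply-free in principle
print(Wb)
print(h)
```

The second part of the method (quantizing layer representations so the remaining multiplications in back-propagation become binary shifts) follows the same spirit: values are restricted to powers of two so that multiplication can be replaced by bit shifting.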
