Training deep neural networks with low precision multiplications

2014.


Abstract:

Multipliers are the most space- and power-hungry arithmetic operators in digital implementations of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point.
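To illustrate the kind of number format the abstract refers to, below is a minimal sketch of signed fixed-point quantization: values are rounded to a grid of step 2^-frac_bits and clipped to the range representable in a given word width. The function name, default bit widths, and NumPy-based implementation are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def to_fixed_point(x, word_bits=10, frac_bits=8):
    """Quantize x to a signed fixed-point format (illustrative sketch).

    word_bits: total bits of the signed integer representation.
    frac_bits: bits devoted to the fractional part.
    Values are rounded to the nearest multiple of 2**-frac_bits and
    clipped to the representable range of the signed word.
    """
    scale = 2.0 ** frac_bits
    max_int = 2 ** (word_bits - 1) - 1   # largest representable integer
    min_int = -2 ** (word_bits - 1)      # smallest representable integer
    q = np.clip(np.round(x * scale), min_int, max_int)
    return q / scale

# Example: 0.1 rounds to 26/256; 3.14159 saturates at 511/256.
vals = to_fixed_point(np.array([0.1, -1.5, 3.14159]))
```

In a dynamic fixed-point scheme, by contrast, the split between integer and fractional bits (here fixed by `frac_bits`) would be shared per group of values and adjusted during training as their range changes.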
