Training deep neural networks with low precision multiplications
2014.
Abstract:
Multipliers are the most space- and power-hungry arithmetic operators in the digital implementation of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point.
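
To make the number formats concrete, here is a minimal NumPy sketch of round-to-nearest fixed-point quantization and a dynamic fixed-point variant in which a whole tensor shares one scaling exponent chosen from its current range. The function names and the exponent-selection rule are illustrative assumptions, not the paper's exact implementation (the paper adjusts shared exponents during training rather than picking them from a single snapshot of the data).

```python
import numpy as np

def quantize_fixed_point(x, integer_bits, fractional_bits):
    """Round x to a signed fixed-point grid.

    Generic round-to-nearest quantizer: `fractional_bits` sets the grid
    spacing, `integer_bits` sets the representable range. This is a sketch
    of the general technique, not necessarily the paper's exact scheme.
    """
    scale = 2.0 ** fractional_bits
    max_val = 2.0 ** integer_bits - 1.0 / scale   # largest representable value
    min_val = -2.0 ** integer_bits                # smallest representable value
    return np.clip(np.round(x * scale) / scale, min_val, max_val)

def quantize_dynamic_fixed_point(x, word_bits):
    """Dynamic fixed point: the radix point moves with the tensor's range.

    Hypothetical helper: the shared exponent is chosen so the largest
    magnitude in the tensor fits the word; one bit is reserved for sign.
    """
    max_abs = np.max(np.abs(x)) + 1e-12
    integer_bits = max(0, int(np.ceil(np.log2(max_abs))))
    fractional_bits = word_bits - 1 - integer_bits
    return quantize_fixed_point(x, integer_bits, fractional_bits)

# Example: quantize one layer's weights to a 10-bit word.
w = np.random.randn(256, 256).astype(np.float32)
w_q = quantize_dynamic_fixed_point(w, word_bits=10)
```

The practical difference between the two formats is that plain fixed point commits to one radix-point position for the whole network, while dynamic fixed point lets each group of values (e.g. the weights of one layer) carry its own shared exponent, so small-magnitude and large-magnitude tensors can both use the available bits well.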