Accelerating Deep Learning by Focusing on the Biggest Losers

Angela H. Jiang
Daniel L.-K. Wong
Giulio Zhou
Michael Kaminsky
Other Links: arxiv.org

Abstract:

This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration. Selective-Backprop uses the output of a training example's forward pass to decide whether to use that example to compute gradients and update parameters, or to skip…
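The selection step described in the abstract can be sketched as follows: after the forward pass produces a per-example loss for the batch, only the highest-loss examples are kept for the backward pass. This is a minimal illustrative sketch, not the paper's implementation; the function name and the `keep_frac` knob are assumptions for illustration.

```python
def select_high_loss(losses, keep_frac=0.5):
    """Return indices of examples whose forward-pass loss falls in the
    top `keep_frac` fraction of the batch; only these would be used
    for the backward pass. `keep_frac` is a hypothetical knob, not a
    parameter named in the abstract."""
    k = max(1, int(len(losses) * keep_frac))
    # Rank batch indices by loss, highest first, and keep the top k.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])

# Toy batch of per-example losses from a forward pass.
batch_losses = [0.1, 2.3, 0.05, 1.7, 0.4, 0.9]
selected = select_high_loss(batch_losses, keep_frac=0.5)
# selected → [1, 3, 5], the three highest-loss examples
```

In a real training loop the gradient computation would then run only on the selected subset, which is where the speedup comes from.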
