Adaptively Truncating Backpropagation Through Time to Control Gradient Bias

UAI, 2019.

Abstract:

Truncated backpropagation through time (TBPTT) is a popular method for learning in recurrent neural networks (RNNs) that saves computation and memory at the cost of bias by truncating backpropagation after a fixed number of lags. In practice, choosing the optimal truncation length is difficult: TBPTT will not converge if the truncation ...
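To make the fixed-truncation baseline concrete, here is a minimal sketch of TBPTT for a one-layer tanh RNN with scalar inputs and a squared-error readout. All names (`tbptt_gradients`, the choice of model and loss) are illustrative, not the paper's implementation; the sketch only shows the mechanism the paper analyzes: the forward pass carries the hidden state across chunks, but gradients are cut at each chunk boundary after `K` lags.

```python
import numpy as np

def tbptt_gradients(W, U, v, xs, ys, K):
    """One pass of truncated BPTT: gradients of the squared-error loss
    sum_t (v . h_t - y_t)^2 w.r.t. W, U, v, backpropagating at most K
    steps per chunk. The hidden state is carried across chunks, but the
    gradient is cut (the 'detach') at each chunk boundary."""
    H = W.shape[0]
    h = np.zeros(H)
    gW, gU, gv = np.zeros_like(W), np.zeros_like(U), np.zeros_like(v)
    T = len(xs)
    for start in range(0, T, K):
        chunk = range(start, min(start + K, T))
        hs, pres = [h], []          # hs[0] is the detached entry state
        # forward through the chunk: h_t = tanh(W h_{t-1} + U x_t)
        for t in chunk:
            pre = W @ hs[-1] + U * xs[t]
            pres.append(pre)
            hs.append(np.tanh(pre))
        # backward through this chunk only -- the truncation
        dh = np.zeros(H)
        for i in reversed(range(len(pres))):
            t = start + i
            err = 2.0 * (v @ hs[i + 1] - ys[t])
            gv += err * hs[i + 1]
            dh = dh + err * v
            dpre = dh * (1.0 - np.tanh(pres[i]) ** 2)
            gW += np.outer(dpre, hs[i])
            gU += dpre * xs[t]
            dh = W.T @ dpre          # flows to earlier steps in the chunk
        h = hs[-1]                   # carry state forward, gradient detached
    return gW, gU, gv
```

Note the bias the abstract refers to: the readout gradient `gv` depends only on forward values and is unaffected by `K`, but the recurrent gradients `gW`, `gU` lose all cross-chunk terms, so a small `K` yields cheap but biased estimates. Choosing `K` adaptively to control that bias is the subject of the paper.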
