A Practical Sparse Approximation for Real Time Recurrent Learning

Abstract:

Current methods for training recurrent neural networks are based on backpropagation through time, which requires storing a complete history of network states and prohibits updating the weights 'online' (after every timestep). Real Time Recurrent Learning (RTRL) eliminates the need for history storage and allows for online weight updates…
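
The abstract's contrast between backpropagation through time (which stores a full state history) and RTRL (which carries a Jacobian forward instead) is concrete enough to sketch. Below is a minimal NumPy illustration of the RTRL recursion for a vanilla tanh RNN, with a SnAp-1-style sparsity mask on the influence matrix in the spirit of the paper's title. The network sizes, mask construction, and toy loss are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_h = 3, 8
W = rng.normal(scale=0.5, size=(n_h, n_h))   # recurrent weights
U = rng.normal(scale=0.5, size=(n_h, n_in))  # input-to-hidden weights

# Influence matrix J = dh/dvec(W), shape (n_h, n_h * n_h).
J = np.zeros((n_h, n_h * n_h))
h = np.zeros(n_h)

# Illustrative SnAp-1-style mask (assumption): keep entry J[i, (k, l)]
# only where parameter W[k, l] feeds unit i directly, i.e. i == k.
mask = np.zeros((n_h, n_h, n_h))
for i in range(n_h):
    mask[i, i, :] = 1.0
mask = mask.reshape(n_h, n_h * n_h)

grad_W = np.zeros_like(W)
for t in range(20):
    x = rng.normal(size=n_in)
    h_prev = h
    h = np.tanh(W @ h_prev + U @ x)
    D = 1.0 - h ** 2                          # derivative of tanh

    # Immediate Jacobian dh_t/dvec(W): unit i depends only on row W[i, :].
    imm = np.zeros((n_h, n_h, n_h))
    for i in range(n_h):
        imm[i, i, :] = D[i] * h_prev
    imm = imm.reshape(n_h, n_h * n_h)

    # RTRL recursion J_t = diag(D) @ W @ J_{t-1} + imm, followed by the
    # sparse pruning step; unapproximated RTRL would skip the mask.
    J = (D[:, None] * (W @ J) + imm) * mask

    # Online gradient for a toy loss L = 0.5 * ||h_t||^2, so dL/dh = h.
    grad_W += (h @ J).reshape(n_h, n_h)
```

A real sparse-RTRL implementation would store the masked influence matrix in a sparse format so the `W @ J` product scales with the number of retained entries rather than with the full dense matrix; the dense arrays above are only for readability.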
