A Practical Sparse Approximation for Real Time Recurrent Learning
Abstract:
Current methods for training recurrent neural networks are based on backpropagation through time, which requires storing a complete history of network states and prohibits updating the weights 'online' (after every timestep). Real Time Recurrent Learning (RTRL) eliminates the need for history storage and allows for online weight updates…
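To make the contrast with BPTT concrete, here is a minimal NumPy sketch of *vanilla* (dense) RTRL for a tiny tanh RNN: instead of storing past states, it carries forward a sensitivity tensor `P[i,k,l] = dh[i]/dW[k,l]` and updates the recurrent weights after every timestep. This is a toy illustration under assumed choices (squared-error loss toward a zero target, only the recurrent matrix `W` trained), not the paper's sparse approximation; note the `O(n^3)` memory and `O(n^4)` compute that motivates sparsifying it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                            # hidden size (kept small: dense RTRL scales badly)
W = rng.normal(0, 0.3, (n, n))   # recurrent weights, the only trained parameters here
h = np.zeros(n)                  # hidden state
P = np.zeros((n, n, n))          # sensitivity dh[i]/dW[k,l], carried forward online
lr = 0.01

for t in range(100):
    x = rng.normal(size=n)       # toy input, added directly to the pre-activation
    a = W @ h + x
    h_new = np.tanh(a)
    D = 1.0 - h_new**2           # tanh'(a)

    # Direct term: d a[k] / d W[k,l] = h[l] (previous state)
    direct = np.zeros((n, n, n))
    for k in range(n):
        direct[k, k, :] = h

    # RTRL recursion: P_new[i,k,l] = D[i] * (sum_j W[i,j] P[j,k,l] + direct[i,k,l])
    P = D[:, None, None] * (np.einsum('ij,jkl->ikl', W, P) + direct)
    h = h_new

    # Online gradient of L = 0.5 * ||h - 0||^2 and immediate weight update
    err = h                                  # dL/dh for a zero target
    grad = np.einsum('i,ikl->kl', err, P)    # dL/dW[k,l] = sum_i err[i] P[i,k,l]
    W -= lr * grad                           # update after every timestep, no history stored
```

No history of states is ever kept: the entire memory cost is the current `h` and the sensitivity tensor `P`, which is exactly the object that sparse RTRL approximations compress.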