Decoupled Neural Interfaces using Synthetic Gradients
International Conference on Machine Learning (ICML), 2017.
Abstract:
Training directed neural networks typically requires forward-propagating data through a computation graph, followed by backpropagating an error signal, to produce weight updates. All layers, or more generally, modules, of the network are therefore locked, in the sense that they must wait for the remainder of the network to execute forwards and propagate error backwards before they can be updated.
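The paper's idea is to break this locking by giving each module a small auxiliary model that predicts the error gradient from the module's activations alone, so the module can update immediately; the predictor is then trained against the true gradient whenever it arrives. The following is a minimal numpy sketch of that idea under simplifying assumptions (a two-layer linear regression network and a linear synthetic-gradient module `M`; all names are illustrative and not from the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = W_true @ x
W_true = rng.normal(size=(2, 4))
X = rng.normal(size=(64, 4))
Y = X @ W_true.T

# Layer 1 (decoupled) and layer 2 (output head), plus a linear
# synthetic-gradient module M that predicts dL/dh1 from h1 alone.
W1 = 0.1 * rng.normal(size=(3, 4))
W2 = 0.1 * rng.normal(size=(2, 3))
M = np.zeros((3, 3))   # synthetic-gradient predictor (hypothetical linear form)

lr, lr_m, n = 0.01, 0.01, len(X)
losses = []
for step in range(500):
    h1 = X @ W1.T                      # layer 1 forward
    g_hat = h1 @ M.T                   # predicted dL/dh1 (the synthetic gradient)
    W1 = W1 - lr * g_hat.T @ X / n     # layer 1 updates WITHOUT waiting

    pred = h1 @ W2.T                   # the rest of the network runs later
    err = pred - Y                     # dL/dpred for a 0.5 * MSE loss
    g_true = err @ W2                  # true dL/dh1, now available
    W2 = W2 - lr * err.T @ h1 / n

    # Train M to regress the true gradient once it arrives
    M = M - lr_m * (g_hat - g_true).T @ h1 / n
    losses.append(float(np.mean(err ** 2)))
```

In this sketch layer 1 never touches the true gradient directly: it consumes only `g_hat`, while `M` chases `g_true` as a regression target, which is the decoupling the abstract describes.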