Overcoming catastrophic forgetting in neural networks

Proceedings of the National Academy of Sciences of the United States of America, Volume 114, Issue 13, 2017 (arXiv: 1612.00796).

Keywords:
artificial intelligence, continual learning, deep learning, stability plasticity, synaptic consolidation

Abstract:

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
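
The mechanism the abstract describes, selectively slowing down learning on weights important to earlier tasks, is the paper's elastic weight consolidation (EWC) penalty. Below is a minimal PyTorch sketch of that idea: a quadratic term that anchors each parameter to its post-task-A value, weighted by a per-parameter importance estimate (a diagonal Fisher information in the paper). The function name, the `fisher`/`old_params` dictionaries, and the lambda value are illustrative assumptions, not the authors' code.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty that slows learning on weights important to old tasks.

    fisher: dict mapping parameter name -> per-element importance estimates
            (e.g. a diagonal Fisher information, as in the paper).
    old_params: dict mapping parameter name -> parameter values recorded
                after training on the previous task.
    lam: strength of the penalty (illustrative value, not from the paper).
    """
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        # Sum of F_i * (theta_i - theta*_i)^2: large Fisher values mark
        # weights the old task relies on, so moving them is penalised.
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Usage sketch while training on a new task B (fisher_a and params_a are
# assumed to have been recorded after training on task A):
#   loss = criterion(model(x_b), y_b) + ewc_penalty(model, fisher_a, params_a)
```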
