Lifelong Learning
Humans and higher animals continually acquire, fine-tune, and transfer knowledge and skills throughout their lives. This ability, known as lifelong learning, is orchestrated by a set of neurocognitive mechanisms that jointly support the development of sensorimotor skills as well as the consolidation and retrieval of long-term memory. Lifelong learning is therefore essential for computational systems and autonomous agents that interact with the real world and must process continuous streams of information. Yet lifelong/continual learning has long remained a challenge for machine learning and neural network models, because incrementally acquiring information from non-stationary data distributions generally leads to catastrophic forgetting (interference): training a model on new information disrupts previously learned knowledge. This typically causes an abrupt drop in performance or, in the worst case, the old knowledge being completely overwritten by the new. For deep neural network models that learn from a fixed training set, the inability to exploit information that becomes available incrementally over time is a major shortcoming.
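To make the phenomenon concrete, here is a minimal, self-contained sketch (not taken from any of the papers listed below): a small network is trained on two synthetic tasks one after the other, and its accuracy on the first task collapses once it has been fine-tuned on the second. All data, architecture, and hyperparameters are invented for illustration.

```python
# Illustrative sketch of catastrophic forgetting under naive sequential training.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two Gaussian blobs; "shift" moves the decision boundary between tasks,
    # so the labelling rules of task A and task B conflict with each other.
    x = torch.randn(512, 2) + shift
    y = (x[:, 0] + x[:, 1] > 2 * shift).long()
    return x, y

task_a, task_b = make_task(0.0), make_task(3.0)
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

for name, (x, y) in [("A", task_a), ("B", task_b)]:
    for _ in range(200):                      # train on one task at a time
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Accuracy on task A typically drops sharply after fine-tuning on task B.
    print(f"after task {name}: acc A={accuracy(*task_a):.2f}, "
          f"acc B={accuracy(*task_b):.2f}")
```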
Humans can learn and accumulate knowledge throughout their whole lives, but artificial neural networks that learn sequential tasks suffer from catastrophic forgetting, in which the learned knowledge is disrupted while a new task is being learned
Neural Networks, (2019): 54-71
The most popular deep and shallow learning models of lifelong learning are restricted to the supervised domain, relying on large amounts of annotated data collected in controlled environments
Neural Networks, (2019): 56-73
AR1 accuracy was higher than that of existing regularization approaches such as Learning without Forgetting, Elastic Weight Consolidation, and SI, and, in preliminary experiments, it compares favorably with rehearsal techniques when the external memory size is not large
ICLR, (2019)
In this paper we study the problem of sequential learning using a network with fixed capacity – a prerequisite for a scalable and computationally efficient solution
IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 12 (2018): 2935-2947
We propose the Learning without Forgetting method for convolutional neural networks, which can be seen as a hybrid of knowledge distillation and fine-tuning, learning parameters that are discriminative for the new task while preserving outputs for the original tasks on the training data
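The loss this describes can be sketched roughly as follows, assuming a shared backbone with separate heads for the old and new tasks; the KL-based distillation term, the temperature, and the weighting are illustrative choices rather than the paper's exact formulation.

```python
# Hedged sketch of a Learning-without-Forgetting-style loss: cross-entropy on
# the new task plus a distillation term that keeps the old-task head close to
# the outputs recorded before fine-tuning started.
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, new_targets, old_logits, old_logits_recorded,
             T=2.0, lambda_old=1.0):
    """new_logits/new_targets: current new-task head on new-task data.
    old_logits: old-task head on the same inputs (current model).
    old_logits_recorded: old-task head outputs saved before training began."""
    ce = F.cross_entropy(new_logits, new_targets)
    # Soften both distributions with temperature T and match them (distillation).
    log_p = F.log_softmax(old_logits / T, dim=1)
    q = F.softmax(old_logits_recorded / T, dim=1)
    distill = F.kl_div(log_p, q, reduction="batchmean") * (T * T)
    return ce + lambda_old * distill
```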
ECCV, (2018)
Given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively
ECCV, (2018)
We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes
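A rough sketch of this kind of exemplar rehearsal, assuming a small fixed-capacity memory of old-class samples that is mixed into every new-task batch; the memory management (random subsampling here, rather than herding) and batch sizes are simplifying assumptions.

```python
# Hedged sketch of exemplar rehearsal: each batch of new-class data is
# concatenated with samples drawn from a small memory of old-class examples,
# so the loss still covers previously learned classes.
import random
import torch

class ExemplarMemory:
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.samples = []                      # list of (x, y) tensor pairs

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.samples.append((xi, yi))
        # Keep a random subset when over capacity (smarter selection, e.g.
        # herding, would go here).
        if len(self.samples) > self.capacity:
            self.samples = random.sample(self.samples, self.capacity)

    def sample(self, n):
        batch = random.sample(self.samples, min(n, len(self.samples)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def rehearsal_batch(new_x, new_y, memory, n_old=32):
    if not memory.samples:
        return new_x, new_y
    old_x, old_y = memory.sample(n_old)
    return torch.cat([new_x, old_x]), torch.cat([new_y, old_y])
```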
Arslan Chaudhry, Puneet Kumar Dokania, Thalaiyasingam Ajanthan, Philip H. S. Torr
ECCV, (2018): 556-572
We introduce two metrics to quantify forgetting and intransigence that allow us to understand, analyse, and gain better insights into the behaviour of incremental learning algorithms
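One common way such a forgetting measure is computed (a sketch in the spirit of this line of work, not necessarily the paper's exact definition): take the matrix of accuracies after each training stage and, for every old task, measure how far the final accuracy has dropped from the best accuracy ever reached on it.

```python
# Hedged sketch of an "average forgetting" measure: acc[i][j] is the accuracy
# on task j after finishing training on task i. Forgetting of task j is the
# gap between the best accuracy ever reached on j and the final accuracy on j.
import numpy as np

def average_forgetting(acc):
    acc = np.asarray(acc)          # shape (T, T); the lower triangle is meaningful
    k = acc.shape[0] - 1           # index of the last task trained
    gaps = [acc[:k, j].max() - acc[k, j] for j in range(k)]
    return float(np.mean(gaps))

# Example: accuracy on task 0 drops from 0.90 to 0.60 after learning task 1.
print(average_forgetting([[0.90, 0.00],
                          [0.60, 0.85]]))   # -> 0.30
```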
Ronald Kemker, Christopher Kanan
ICLR, (2018)
We propose FearNet for incremental class learning
ECCV, pp.452-467, (2018)
Retrospection is proposed to cache a small subset of data for old tasks, which proves to be greatly helpful for performance preservation, especially in long sequences of tasks drawn from different distributions
Advances in Neural Information Processing Systems 31 (NIPS 2018), (2018): 3738-3748
We proposed the online Laplace approximation, a Bayesian online learning method for overcoming catastrophic forgetting in neural networks
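The core recursion behind a Bayesian online / Laplace treatment can be sketched as follows (the notation is mine, not necessarily the paper's): the posterior after each task is approximated by a Gaussian whose precision accumulates curvature, and that Gaussian acts as a quadratic penalty when learning the next task.

```latex
% Sketch of an online Laplace recursion (notation illustrative):
\begin{align*}
p(\theta \mid \mathcal{D}_{1:t}) &\approx \mathcal{N}\!\left(\theta;\ \mu_t,\ \Lambda_t^{-1}\right),\\
\mu_t &= \arg\min_{\theta}\; -\log p(\mathcal{D}_t \mid \theta)
        + \tfrac{1}{2}\,(\theta-\mu_{t-1})^{\top} \Lambda_{t-1}\, (\theta-\mu_{t-1}),\\
\Lambda_t &= \Lambda_{t-1} + H_t,
\qquad H_t \approx -\left.\nabla^2_{\theta} \log p(\mathcal{D}_t \mid \theta)\right|_{\theta=\mu_t}.
\end{align*}
```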
ICLR, (2018)
Experimental results showed state-of-the-art performance when compared to previous continual learning approaches, even though variational continual learning has no free parameters in its objective function
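The objective being referred to can be sketched as follows (notation illustrative): the approximate posterior after task t maximises an evidence lower bound in which the previous posterior plays the role of the prior, so no extra trade-off hyperparameter appears in the objective.

```latex
% Sketch of a variational continual learning objective (notation illustrative):
\begin{align*}
q_t(\theta) &= \arg\max_{q \in \mathcal{Q}}\;
  \mathbb{E}_{q(\theta)}\!\left[\log p(\mathcal{D}_t \mid \theta)\right]
  - \mathrm{KL}\!\left(q(\theta)\,\|\,q_{t-1}(\theta)\right).
\end{align*}
```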
Nitin Kamra, Umang Gupta, Yan Liu
arXiv: Learning, (2018)
Despite advances in deep learning, artificial neural networks do not learn the same way as humans do. Today, neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on learnt tasks when tasks are presented one at a time -- this phenomenon is known as catastrophic forgetting
Frontiers in Neurorobotics, (2018)
The proposed architecture can be considered a further step toward more flexible lifelong learning methods that can be deployed in embodied agents for incrementally acquiring and refining knowledge over sustained periods through the active interaction with the environment
Proceedings of the National Academy of Sciences of the United States of America, no. 13 (2017)
We propose elastic weight consolidation (EWC), an approach that remembers old tasks by selectively slowing down learning on the weights important for those tasks
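A minimal sketch of the quadratic penalty that this kind of approach adds to the new-task loss, assuming a diagonal Fisher estimate; the empirical-Fisher-style estimator and the lambda value below are illustrative choices, not the paper's exact recipe.

```python
# Hedged sketch of an EWC-style quadratic penalty: after finishing a task,
# store the parameters and a diagonal importance estimate; while training the
# next task, penalise movement of parameters in proportion to that importance.
import torch

def estimate_diag_fisher(model, loss_fn, data_loader):
    # Empirical-Fisher-style estimate: average of squared gradients of the loss.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    # Quadratic penalty  (lam/2) * sum_i F_i * (theta_i - theta*_i)^2
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# During training on the new task:
#   loss = task_loss + ewc_penalty(model, fisher, old_params)
```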
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Christoph H. Lampert
CVPR, (2017)
We introduce iCaRL, a practical strategy for simultaneously learning classifiers and a feature representation in the class-incremental setting
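At test time iCaRL classifies with a nearest-mean-of-exemplars rule; a minimal sketch of that rule on pre-extracted features, with feature extraction and exemplar selection details omitted.

```python
# Hedged sketch of nearest-mean-of-exemplars classification: each class is
# represented by the mean feature vector of its stored exemplars, and a sample
# is assigned to the class whose mean is closest in feature space.
import numpy as np

def class_means(exemplar_features):
    """exemplar_features: dict class_id -> array of shape (n_exemplars, d)."""
    means = {}
    for c, feats in exemplar_features.items():
        mu = feats.mean(axis=0)
        means[c] = mu / (np.linalg.norm(mu) + 1e-8)
    return means

def predict(feature, means):
    feature = feature / (np.linalg.norm(feature) + 1e-8)
    return min(means, key=lambda c: np.linalg.norm(feature - means[c]))
```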
ICML, (2017): 3987-3995
We have shown that the problem of catastrophic forgetting commonly encountered in continual learning scenarios can be alleviated by allowing individual synapses to estimate their importance for solving past tasks
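Written out as a sketch (notation illustrative, not the paper's verbatim equations), the idea is that each parameter accumulates its contribution to the decrease of the loss while a task is being trained, and that accumulated quantity then weights a quadratic penalty for subsequent tasks.

```latex
% Sketch of per-synapse importance (notation illustrative). While task \mu is
% trained, each parameter k accumulates its contribution to the loss decrease
% along the training trajectory:
%   \omega_k^{\mu} = -\int_{t^{\mu-1}}^{t^{\mu}} g_k(t)\,\dot{\theta}_k(t)\,dt .
% After the task, this is normalised by how far the parameter moved and added
% to a running importance, which weights a penalty on later tasks:
\begin{align*}
\Omega_k \mathrel{+}= \frac{\omega_k^{\mu}}{\left(\Delta_k^{\mu}\right)^2 + \xi},
\qquad
\tilde{L} = L + c \sum_k \Omega_k \left(\tilde{\theta}_k - \theta_k\right)^2 .
\end{align*}
```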
CVPR, (2017)
Expert Gate's autoencoders can distinguish different tasks as well as a discriminative classifier trained on all data
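A hedged sketch of this kind of autoencoder gating: one small autoencoder per task, and the input is routed to the expert whose autoencoder reconstructs it with the lowest error. The module interfaces and the error measure are assumptions for illustration.

```python
# Hedged sketch of autoencoder-based task gating: route each input to the
# expert whose task autoencoder yields the smallest reconstruction error.
import torch

def route_to_expert(x, autoencoders, experts):
    """autoencoders / experts: dicts mapping task_id -> nn.Module."""
    errors = {}
    with torch.no_grad():
        for task_id, ae in autoencoders.items():
            errors[task_id] = torch.mean((ae(x) - x) ** 2).item()
    best_task = min(errors, key=errors.get)
    return experts[best_task](x), best_task
```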
ICCV, (2017): 1329-1337
Rather than preserving the optimal weights of the previous tasks, we propose an alternative that preserves the features that are crucial for the performance in the corresponding environments
CoRL, (2017): 17-26
Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while naive incremental strategies have been shown to suffer from catastrophic forgetting