Maintaining Plasticity in Deep Continual Learning
arXiv (2023)
Abstract
Modern deep-learning systems are specialized to problem settings in which
training occurs once and then never again, as opposed to continual-learning
settings in which training occurs continually. If deep-learning systems are
applied in a continual learning setting, then it is well known that they may
fail to remember earlier examples. More fundamental, but less well known, is
that they may also lose their ability to learn on new examples, a phenomenon
called loss of plasticity. We provide direct demonstrations of loss of
plasticity using the MNIST and ImageNet datasets repurposed for continual
learning as sequences of tasks. In ImageNet, binary classification performance
dropped from 89% on an early task down to 77%, about the level of a
linear network, on the 2000th task. Loss of plasticity occurred with a wide
range of deep network architectures, optimizers, activation functions, batch
normalization, and dropout, but was substantially eased by L2-regularization,
particularly when combined with weight perturbation. Further, we introduce a
new algorithm, continual backpropagation, which slightly modifies
conventional backpropagation to reinitialize a small fraction of less-used
units after each example and appears to maintain plasticity indefinitely.
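To make the L2-plus-perturbation finding concrete, below is a minimal sketch of an update rule that shrinks weights toward zero (L2 regularization) and then adds small Gaussian noise after each gradient step. The function name sgd_l2_perturb and the hyperparameter values are illustrative placeholders, not the paper's published settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_l2_perturb(params, grads, lr=0.01, weight_decay=1e-3, noise_std=1e-4):
    """One SGD update with L2 shrinkage plus a small Gaussian
    weight perturbation (illustrative hyperparameter values)."""
    updated = []
    for p, g in zip(params, grads):
        p = p - lr * (g + weight_decay * p)               # gradient step with L2 shrinkage
        p = p + noise_std * rng.standard_normal(p.shape)  # small random perturbation
        updated.append(p)
    return updated
```

One hedged reading of why the combination helps: the shrinkage counteracts unbounded weight-magnitude growth, while the noise sustains variability in the weights, both of which the abstract's findings suggest matter for retaining the ability to learn.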
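The sketch below illustrates the idea behind continual backpropagation for a single hidden layer: track a running utility for each hidden unit and, after each example, reinitialize a small fraction of the least-used mature units. The contribution-style utility (|activation| times summed |outgoing weights|), the constants DECAY, REPLACE_RATE, and MATURITY, and the toy architecture are assumptions made for illustration; the paper's exact utility measure and settings may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 100, 1

W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_out, n_hidden))

utility = np.zeros(n_hidden)  # running "usefulness" of each hidden unit
age = np.zeros(n_hidden)      # steps since each unit was (re)initialized
budget = 0.0                  # fractional count of units owed a replacement

DECAY = 0.99         # utility running-average decay (assumed value)
REPLACE_RATE = 1e-4  # fraction of eligible units replaced per example
MATURITY = 100       # steps before a fresh unit may be replaced

def train_step(x, y, lr=0.01):
    global utility, budget
    # --- conventional backpropagation on one example ---
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden activations
    err = W2 @ h - y                   # squared-error gradient at the output
    dh = (W2.T @ err) * (h > 0.0)
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(dh, x)
    b1 -= lr * dh
    # --- track how much each hidden unit contributes to the output ---
    contribution = np.abs(h) * np.abs(W2).sum(axis=0)
    utility = DECAY * utility + (1.0 - DECAY) * contribution
    age[:] += 1
    # --- reinitialize a small fraction of the least-used mature units ---
    eligible = np.flatnonzero(age > MATURITY)
    budget += REPLACE_RATE * eligible.size
    while budget >= 1.0 and eligible.size > 0:
        worst = eligible[np.argmin(utility[eligible])]
        W1[worst] = rng.normal(0.0, 1.0 / np.sqrt(n_in), n_in)  # fresh input weights
        b1[worst] = 0.0
        W2[:, worst] = 0.0  # zero outgoing weights: the reset unit starts silent
        utility[worst] = 0.0
        age[worst] = 0
        budget -= 1.0
        eligible = eligible[eligible != worst]

# Toy usage: a stream of random regression examples.
for _ in range(1000):
    x = rng.standard_normal(n_in)
    train_step(x, np.array([x.sum()]))
```

Zeroing a reset unit's outgoing weights means it initially has no effect on the network's predictions, so reinitialization can run continually without abrupt performance drops while the fresh unit relearns something useful.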