What is Happening Inside a Continual Learning Model? A Representation-Based Evaluation of Representational Forgetting
CVPR Workshops (2020)
Abstract
Recently, many continual learning methods have been proposed, and their performance is usually evaluated based on the model's final output, such as the predicted class labels. However, this output-based evaluation tells us nothing about how the representations a model has learned from given tasks are forgotten during the learning process, even though understanding this is essential for devising algorithms robust to catastrophic forgetting, an intrinsic problem in continual learning. In this work, we propose a representation-based evaluation framework and demonstrate, through extensive experiments on three benchmark datasets, that it helps us better understand representational forgetting. These experiments yield the following findings: 1) a non-negligible amount of representational forgetting occurs in the shallow layers of a deep neural network, and 2) which tasks are learned more accurately when representational forgetting occurs depends on the depth of the layer at which the forgetting is observed.
Keywords
continual learning model, representational forgetting, output-based evaluation, learning process, catastrophic forgetting, representation-based evaluation framework, deep neural network model