Kaizen: Practical Self-supervised Continual Learning with Continual Fine-tuning
CoRR (2023)
Abstract
Self-supervised learning (SSL) has shown remarkable performance in computer
vision tasks when trained offline. However, in a Continual Learning (CL)
scenario where new data is introduced progressively, models still suffer from
catastrophic forgetting. Retraining a model from scratch to adapt to newly
generated data is time-consuming and inefficient. Previous approaches suggested
re-purposing self-supervised objectives with knowledge distillation to mitigate
forgetting across tasks, assuming that labels from all tasks are available
during fine-tuning. In this paper, we generalize self-supervised continual
learning in a practical setting where available labels can be leveraged in any
step of the SSL process. With an increasing number of continual tasks, this
offers more flexibility in the pre-training and fine-tuning phases. With
Kaizen, we introduce a training architecture that is able to mitigate
catastrophic forgetting for both the feature extractor and classifier with a
carefully designed loss function. By using a set of comprehensive evaluation
metrics reflecting different aspects of continual learning, we demonstrate
that Kaizen significantly outperforms previous SSL models on competitive
vision benchmarks, with up to 16.5% accuracy improvement. Kaizen is able to
balance the trade-off between knowledge retention and learning from new data
with an end-to-end model, paving the way for practical deployment of
continual learning systems.
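
The abstract describes a single loss that mitigates forgetting in both the feature extractor and the classifier by distilling from a frozen copy of the previous-task model while learning from new data, using labels whenever they are available. Below is a minimal PyTorch sketch of such a combined loss; it is an illustration under assumptions, not the authors' released implementation. The function name kaizen_style_loss, the cosine-similarity feature distillation, the temperature-scaled KL classifier distillation, and all loss weights are hypothetical choices.

import torch
import torch.nn.functional as F

def kaizen_style_loss(student_feat, student_logits,
                      teacher_feat, teacher_logits,
                      ssl_loss, labels=None,
                      w_ssl=1.0, w_feat_kd=1.0, w_cls_kd=1.0, w_sup=1.0,
                      temperature=2.0):
    """Sketch of a combined continual-learning loss (weights are assumptions):
    SSL on new data + feature distillation + classifier distillation
    + optional supervised cross-entropy on whatever labels are available."""
    # Feature-extractor distillation: keep the current model's features close
    # to those of the frozen previous-task model (cosine similarity here; the
    # paper re-purposes the SSL objective itself, which this approximates).
    feat_kd = 1.0 - F.cosine_similarity(
        student_feat, teacher_feat.detach(), dim=-1).mean()

    # Classifier distillation: temperature-scaled soft labels from the frozen
    # previous classifier, matched with a KL divergence.
    cls_kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    loss = w_ssl * ssl_loss + w_feat_kd * feat_kd + w_cls_kd * cls_kd

    # Labels can be leveraged at any step of the SSL process: add a
    # supervised term only when they are present for the current batch.
    if labels is not None:
        loss = loss + w_sup * F.cross_entropy(student_logits, labels)
    return loss

The two distillation terms are what let one end-to-end model trade off knowledge retention against plasticity: the weights w_feat_kd and w_cls_kd pull toward the previous model, while w_ssl and w_sup pull toward the new data.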
Keywords
continual learning, self-supervised, fine-tuning