Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer
CVPR 2024
Abstract
Class-incremental learning (CIL) aims to enable models to continuously learn
new classes while overcoming catastrophic forgetting. The introduction of
pre-trained models has brought new tuning paradigms to CIL. In this paper, we
revisit different parameter-efficient tuning (PET) methods within the context
of continual learning. We observe that adapter tuning demonstrates superiority
over prompt-based methods, even without parameter expansion in each learning
session. Motivated by this, we propose incrementally tuning the shared adapter
without imposing parameter update constraints, enhancing the learning capacity
of the backbone. Additionally, we employ feature sampling from stored
prototypes to retrain a unified classifier, further improving its performance.
We estimate the semantic shift of old prototypes without access to past samples
and update stored prototypes session by session. Our proposed method eliminates
model expansion and avoids retaining any image samples. It surpasses previous
pre-trained model-based CIL methods and demonstrates remarkable continual
learning capabilities. Experimental results on five CIL benchmarks validate the
effectiveness of our approach, achieving state-of-the-art (SOTA) performance.
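The abstract's two classifier-side ideas can be illustrated together: estimating the semantic shift of stored old-class prototypes without replaying past images, and sampling pseudo-features from those prototypes to retrain a unified classifier. The sketch below is a hypothetical reconstruction, not the paper's exact algorithm: it assumes the shift of each old prototype is approximated by a Gaussian-weighted average of the drift observed on current-session features extracted before and after tuning the shared adapter (in the spirit of semantic drift compensation), and that prototypes are stored as per-class Gaussians. All function and parameter names are illustrative.

```python
import numpy as np

def estimate_prototype_shift(old_prototypes, feats_before, feats_after, sigma=1.0):
    """Estimate semantic shift of old prototypes without old samples (sketch).

    feats_before / feats_after: current-session features extracted with the
    backbone before and after the adapter was tuned, shape (N, D).
    Each old prototype is moved by a Gaussian-weighted mean of the observed
    per-sample drift, weighted by proximity to the prototype.
    """
    drift = feats_after - feats_before                  # per-sample drift (N, D)
    updated = []
    for p in old_prototypes:                            # each prototype (D,)
        d2 = np.sum((feats_before - p) ** 2, axis=1)    # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))            # nearby samples weigh more
        w = w / (w.sum() + 1e-8)                        # normalize weights
        updated.append(p + w @ drift)                   # shifted prototype
    return np.stack(updated)

def sample_pseudo_features(prototypes, var_diag, n_per_class, seed=0):
    """Sample pseudo-features from stored Gaussian prototypes to retrain
    a unified classifier across all seen classes (sketch)."""
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for c, (mu, var) in enumerate(zip(prototypes, var_diag)):
        feats.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.shape[0])))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)
```

After each session, the stored prototypes would first be shifted with `estimate_prototype_shift`, then `sample_pseudo_features` would supply a balanced feature set for retraining the classifier head, so no image samples need to be retained.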