Alleviating Representational Shift for Continual Fine-tuning.

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
We study a practical setting of continual learning: continually fine-tuning a pre-trained model. Prior work has found that, when training on new tasks, the features (penultimate-layer representations) of data from previous tasks change, a phenomenon called representational shift. Beyond the shift of these features, we reveal that the intermediate layers' representational shift (IRS) also matters, since it disrupts batch normalization and is thus another crucial cause of catastrophic forgetting. Motivated by this, we propose ConFiT, a fine-tuning method with two components: cross-convolution batch normalization (Xconv BN) and hierarchical fine-tuning. Xconv BN maintains pre-convolution running means instead of post-convolution ones and recovers the post-convolution statistics before testing, which corrects the inaccurate mean estimates arising under IRS. Hierarchical fine-tuning uses a multi-stage strategy to fine-tune the pre-trained network, preventing large changes in the Conv layers and thus alleviating IRS. Experimental results on four datasets show that our method remarkably outperforms several state-of-the-art methods with lower storage overhead.
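To make the Xconv BN idea concrete, below is a minimal sketch (not the authors' code) of tracking a pre-convolution running mean and mapping it through the current conv weights to refresh the post-convolution mean before evaluation. The class name XconvBNSketch, the recovery method, and the assumption that the conv input mean is spatially stationary are all illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class XconvBNSketch(nn.Module):
    """Hypothetical Conv2d + BatchNorm2d block that keeps a running mean of the
    conv *input* (pre-convolution) and, before testing, pushes it through the
    possibly updated conv weights to re-estimate the post-convolution mean
    that BatchNorm uses at inference time."""

    def __init__(self, in_ch, out_ch, kernel_size, momentum=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(out_ch, momentum=momentum)
        # Pre-convolution running mean: one value per input channel.
        self.register_buffer("pre_mean", torch.zeros(in_ch))
        self.momentum = momentum

    def forward(self, x):
        if self.training:
            with torch.no_grad():
                batch_mean = x.mean(dim=(0, 2, 3))  # per-input-channel mean
                self.pre_mean.mul_(1 - self.momentum).add_(self.momentum * batch_mean)
        return self.bn(self.conv(x))

    @torch.no_grad()
    def recover_post_conv_mean(self):
        """Call once before testing. Because convolution is linear, the mean of
        its output (for a spatially stationary input) is the stored input mean
        weighted by the kernel sums, plus the bias."""
        w_sum = self.conv.weight.sum(dim=(2, 3))       # (out_ch, in_ch)
        est = w_sum @ self.pre_mean                    # (out_ch,)
        if self.conv.bias is not None:
            est = est + self.conv.bias
        self.bn.running_mean.copy_(est)
```

The point of the recovery step is that even if fine-tuning on a new task changes the conv weights, the stored pre-convolution statistics of old data can be re-projected through the updated weights, so the BN mean used at test time stays consistent with the shifted intermediate representations.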
Key words
continual fine-tuning, practical setting, continual learning, penultimate layer representations, intermediate layers, IRS, batch normalization, fine-tuning method, Xconv BN, hierarchical fine-tuning, pre-trained network, Conv layers, pre-convolution running means, representational shift