
Parameter-Efficient Tuning Makes a Good Classification Head

Abstract
In recent years, pretrained models revolutionized the paradigm of natural language understanding (NLU): we append a randomly initialized classification head to a pretrained backbone, e.g. BERT, and finetune the whole model. As the pretrained backbone makes a major contribution to the improvement, we naturally expect that a good pretrained classification head can also benefit training. However, the final-layer output of the backbone, i.e. the input of the classification head, changes greatly during finetuning, making the usual head-only pretraining (LP-FT) ineffective. In this paper, we find that parameter-efficient tuning makes a good classification head, with which we can simply replace the randomly initialized head for a stable performance gain. Our experiments demonstrate that a classification head jointly pretrained with parameter-efficient tuning consistently improves performance on 9 tasks in GLUE and SuperGLUE.
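The two-stage recipe the abstract describes can be illustrated with a minimal numpy sketch: first train only the classification head together with a small parameter-efficient module on top of a frozen backbone, then discard the module and transplant the head onto the plain backbone as the starting point for full finetuning. Everything concrete here is an assumption for illustration, not the paper's setup: the "backbone" is a fixed random projection, the parameter-efficient module is a LoRA-style low-rank residual, and the data are toy Gaussian clusters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real setup (all assumptions, not the paper's code):
# a frozen random projection plays the pretrained backbone, and a
# LoRA-style low-rank residual plays the parameter-efficient module.
D_in, D_feat, n_cls, rank = 16, 8, 2, 2
W_backbone = rng.normal(size=(D_in, D_feat))  # frozen throughout

# Toy two-class data: two well-separated Gaussian clusters.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, D_in)),
               rng.normal(+1.0, 1.0, (n // 2, D_in))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pretrain_head_with_adapter(steps=300, lr=0.5):
    """Stage 1: backbone frozen; train only the head plus a small adapter."""
    A = rng.normal(scale=0.01, size=(D_feat, rank))
    B = rng.normal(scale=0.01, size=(rank, D_feat))
    W_head = np.zeros((D_feat, n_cls))
    b = np.zeros(n_cls)
    H0 = X @ W_backbone                      # frozen backbone features
    for _ in range(steps):
        H = H0 + H0 @ A @ B                  # adapter adds a low-rank residual
        P = softmax(H @ W_head + b)
        G = P.copy()
        G[np.arange(n), y] -= 1.0            # d(cross-entropy)/d(logits)
        G /= n
        gH = G @ W_head.T                    # gradient w.r.t. features
        W_head -= lr * (H.T @ G)
        b -= lr * G.sum(axis=0)
        A -= lr * (H0.T @ (gH @ B.T))
        B -= lr * (A.T @ H0.T @ gH)
    return W_head, b

# Stage 2 (simulated): discard the adapter and transplant the head onto the
# plain backbone, i.e. the state from which full finetuning would start.
W_head, b = pretrain_head_with_adapter()
acc = (softmax((X @ W_backbone) @ W_head + b).argmax(axis=1) == y).mean()
print(f"transplanted-head accuracy before any full finetuning: {acc:.2f}")
```

The point of the sketch is the mechanics, not the empirical claim: because the adapter's contribution is a small residual, the head it is jointly trained with remains a good fit for the unmodified backbone features, which is what makes it a useful drop-in initialization.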
Key words
Pretrained Models, Transfer Learning, Meta-Learning, Neural Machine Translation, Representation Learning