Parameter-Efficient Transfer Learning with Diff Pruning

Demi Guo
Yoon Kim

Abstract:

While task-specific finetuning of pretrained networks has led to significant empirical advances in NLP, the large size of networks makes finetuning difficult to deploy in multi-task, memory-constrained settings. We propose diff pruning as a simple approach to enable parameter-efficient transfer learning within the pretrain-finetune framework.
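To make the idea concrete: diff pruning keeps the pretrained parameters frozen and learns a task-specific difference vector added on top of them, pushing that diff toward sparsity with a differentiable relaxation of the L0 norm (a hard-concrete gate), so each new task only requires storing its few nonzero diff entries. Below is a minimal PyTorch sketch of this parameterization; the names (DiffVector, hard_concrete) and the hyperparameter values are illustrative assumptions, not the authors' released implementation.

```python
import math

import torch
import torch.nn as nn

# Hard-concrete hyperparameters (assumed values, following Louizos et al., 2018).
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1


def hard_concrete(log_alpha: torch.Tensor) -> torch.Tensor:
    """Sample a stretched-and-clamped hard-concrete gate in [0, 1].

    This is the standard differentiable L0 relaxation; the exact
    constants above are assumptions for the sketch."""
    u = torch.rand_like(log_alpha).clamp(1e-6, 1.0 - 1e-6)  # avoid log(0)
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / BETA)
    return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)


class DiffVector(nn.Module):
    """Task parameters = frozen pretrained parameters + sparse diff z * w."""

    def __init__(self, pretrained: torch.Tensor):
        super().__init__()
        self.register_buffer("pretrained", pretrained)           # stays frozen
        self.w = nn.Parameter(torch.zeros_like(pretrained))      # diff magnitudes
        self.log_alpha = nn.Parameter(torch.zeros_like(pretrained))  # gate logits

    def forward(self) -> torch.Tensor:
        z = hard_concrete(self.log_alpha)    # soft gate during training
        return self.pretrained + z * self.w  # task-adapted parameters

    def expected_l0(self) -> torch.Tensor:
        # Differentiable expected number of nonzero diff entries;
        # added to the task loss as a sparsity penalty.
        return torch.sigmoid(self.log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()
```

In this sketch, training would minimize the task loss plus a weighted expected_l0 penalty, updating only w and log_alpha while the pretrained buffer stays fixed; at deployment the stochastic gate would be replaced by a deterministic thresholded one, so only the sparse nonzero diff entries need to be stored per task.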
