Integrating Task Specific Information into Pretrained Language Models for Low Resource Fine Tuning
EMNLP (2020)
Abstract
Pretrained Language Models (PLMs) have improved the performance of natural language understanding in recent years. Such models are pretrained on large corpora, which encode the general prior knowledge of natural languages but are agnostic to information characteristic of downstream tasks. This often results in overfitting when fine-tuned with low-resource datasets where task-specific information is limited. In this paper, we integrate label information as a task-specific prior into the self-attention component of pretrained BERT models. Experiments on several benchmarks and real-world datasets suggest that the proposed approach can largely improve the performance of pretrained models when fine-tuning with small datasets.
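To make the core idea in the abstract more concrete, here is a minimal PyTorch sketch of biasing a self-attention layer with a task-specific label prior. This is an illustrative assumption, not the authors' implementation: the class name `LabelAwareSelfAttention`, the learnable `label_emb` table, and the additive label-affinity bias are all hypothetical choices; the paper's exact formulation may differ.

```python
# Hypothetical sketch of label-aware self-attention for low-resource fine-tuning.
# All names and design choices here are illustrative assumptions, not the paper's code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelAwareSelfAttention(nn.Module):
    """Single-head self-attention whose scores are biased by token-label affinity."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        # One embedding per task label, acting as a task-specific prior.
        self.label_emb = nn.Embedding(num_labels, hidden_size)
        self.scale = math.sqrt(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        q = self.query(hidden_states)
        k = self.key(hidden_states)
        v = self.value(hidden_states)
        scores = q @ k.transpose(-1, -2) / self.scale            # (batch, seq, seq)

        # Token-to-label affinity: how strongly each token aligns with the label set.
        # Summed over labels and broadcast over key positions as an additive bias.
        label_affinity = hidden_states @ self.label_emb.weight.T  # (batch, seq, num_labels)
        label_bias = label_affinity.sum(dim=-1, keepdim=True)     # (batch, seq, 1)
        scores = scores + label_bias

        attn = F.softmax(scores, dim=-1)
        return attn @ v


# Usage: such a layer could replace or augment an attention block during fine-tuning.
layer = LabelAwareSelfAttention(hidden_size=768, num_labels=3)
out = layer(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```

The intent of the sketch is only to show where a label prior could enter the attention computation (as a bias on the attention scores before the softmax); how the prior is constructed and injected in the actual method is described in the paper itself.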