Improving Language Plasticity via Pretraining with Active Forgetting
NeurIPS (2023)
Abstract
Pretrained language models (PLMs) are today the primary model for natural
language processing. Despite their impressive downstream performance, it can be
difficult to apply PLMs to new languages, a barrier to making their
capabilities universally accessible. While prior work has shown it is possible to
address this issue by learning a new embedding layer for the new language,
doing so is both data and compute inefficient. We propose to use an active
forgetting mechanism during pretraining, as a simple way of creating PLMs that
can quickly adapt to new languages. Concretely, by resetting the embedding
layer every K updates during pretraining, we encourage the PLM to improve its
ability to learn new embeddings within a limited number of updates, similar
to a meta-learning effect. Experiments with RoBERTa show that models pretrained
with our forgetting mechanism not only demonstrate faster convergence during
language adaptation but also outperform standard ones in a low-data regime,
particularly for languages that are distant from English.
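The abstract only sketches the mechanism; below is a minimal illustrative sketch of how such an embedding reset could be wired into a standard PyTorch pretraining loop. The attribute `model.embeddings`, the helper `reset_embeddings`, the initialization scale (std 0.02), and the default interval `k=1000` are assumptions for illustration, not the paper's exact implementation.

```python
import torch.nn as nn

def reset_embeddings(embedding: nn.Embedding) -> None:
    """Re-initialize the embedding weights, discarding what was learned so far."""
    nn.init.normal_(embedding.weight, mean=0.0, std=0.02)

def pretrain_with_active_forgetting(model, dataloader, optimizer, k: int = 1000):
    """Ordinary pretraining loop, except the embedding layer is reset every k updates.

    Assumes `model.embeddings` is the token embedding (real models may nest it,
    e.g. model.roberta.embeddings.word_embeddings) and that the forward pass
    returns an object with a `.loss` attribute.
    """
    step = 0
    for batch in dataloader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        step += 1
        # Active forgetting: wipe the embeddings every k updates while the
        # transformer body keeps its weights, so the body learns to work with
        # freshly initialized embeddings (the meta-learning-like effect).
        if step % k == 0:
            reset_embeddings(model.embeddings)
```

At adaptation time, the same idea implies that only a new embedding layer needs to be learned for the target language while the body is kept frozen, which is why the repeated resets during pretraining help.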
Keywords
language plasticity