ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

Yu Sun
Shuohuan Wang
Yukun Li
Shikun Feng
Hao Tian

Proceedings of the AAAI Conference on Artificial Intelligence, 2020.


Abstract:

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. Current pre-training procedures usually focus on training the model with several simple tasks to grasp the co-occurrence of words or sentences. However, besides co-occurring information, there exists other valuable lexical, syntactic and semantic information in training corpora, such as named entities, semantic closeness and discourse relations. In order to extract the lexical, syntactic and semantic information from training corpora, we propose a continual pre-training framework named ERNIE 2.0 which incrementally builds pre-training tasks and then learns pre-trained models on these constructed tasks via continual multi-task learning. Based on this framework, we construct several tasks and train the ERNIE 2.0 model to capture lexical, syntactic and semantic aspects of information in the training data. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks, including English tasks on the GLUE benchmark and several similar tasks in Chinese. The source code and pre-trained models have been released at https://github.com/PaddlePaddle/ERNIE.
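The core mechanism the abstract describes, introducing pre-training tasks one at a time and then training on all tasks seen so far, can be sketched as a toy loop. The sketch below is a minimal illustration of that continual multi-task schedule, not the released PaddlePaddle implementation: the encoder, task heads, task names, and random placeholder batches are all hypothetical stand-ins.

```python
# Toy sketch of continual multi-task pre-training: tasks are added one
# stage at a time, and each stage trains on every task seen so far.
# All names and data here are hypothetical stand-ins, not ERNIE 2.0 code.
import random
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Stand-in for the shared Transformer encoder."""
    def __init__(self, dim=128):
        super().__init__()
        self.body = nn.Linear(dim, dim)  # placeholder for real layers

    def forward(self, x):
        return self.body(x)

def train_stage(encoder, heads, tasks, steps=100, dim=128):
    """One continual stage: joint training over all tasks seen so far."""
    params = list(encoder.parameters())
    for head in heads.values():
        params += list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-4)
    for _ in range(steps):
        task = random.choice(tasks)      # sample one task per step
        x = torch.randn(8, dim)          # placeholder input batch
        y = torch.randint(0, 2, (8,))    # placeholder labels
        loss = nn.functional.cross_entropy(heads[task](encoder(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

encoder = SharedEncoder()
all_tasks = ["masked_lm", "semantic_closeness", "discourse_relation"]
heads, seen = {}, []
for task in all_tasks:                   # introduce tasks incrementally
    heads[task] = nn.Linear(128, 2)      # new task-specific head
    seen.append(task)
    train_stage(encoder, heads, seen)    # retrain on old + new tasks
```

Revisiting the earlier tasks at every stage, rather than training on the new task alone, is what distinguishes this continual multi-task schedule from plain sequential fine-tuning, which would otherwise forget previously learned tasks.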
