CamemBERT: a Tasty French Language Model

ACL, pp. 7203-7219, 2020.

Highlights:
We evaluated CamemBERT on four downstream tasks, where our best model reached or improved the state of the art in all tasks considered, even when compared to strong multilingual models such as mBERT, XLM and XLM-R, while having fewer parameters.

Abstract:

Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models, in all languages except English, very limited. Aiming to address this issue ...
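
As a practical illustration (not part of the source page), the sketch below shows how a pretrained CamemBERT is commonly queried for masked-word prediction via the Hugging Face transformers library. The transformers dependency and the camembert-base checkpoint name are assumptions; neither is stated in the abstract above.

    # Minimal sketch: masked-word prediction with a pretrained CamemBERT.
    # Assumes `pip install transformers torch` and the publicly released
    # "camembert-base" checkpoint (an assumption, not from the page above).
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="camembert-base")

    # CamemBERT uses the RoBERTa-style "<mask>" token.
    for prediction in fill_mask("Le camembert est <mask> !"):
        print(prediction["token_str"], round(prediction["score"], 3))

Run as-is, this prints the model's highest-probability completions for the masked word with their scores; it is the standard fill-mask interface, not code from the paper itself.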