SpanBERT: Improving Pre-training by Representing and Predicting Spans

Transactions of the Association for Computational Linguistics, pp. 64-77, 2019.

DOI: https://doi.org/10.1162/tacl_a_00300
Other links: arxiv.org | academic.microsoft.com

Abstract:

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random token...
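The abstract's key idea is replacing BERT's single-token masking with contiguous span masking, where span lengths are drawn from a geometric distribution (the paper reports p = 0.2, clipped at a maximum length of 10, masking about 15% of tokens). The following is a minimal illustrative sketch of that sampling scheme, not the authors' implementation; the function name and token handling are hypothetical simplifications.

```python
import random

def mask_spans(tokens, mask_rate=0.15, p=0.2, max_span=10, mask="[MASK]"):
    """Toy span masking: sample span lengths from a clipped geometric
    distribution and mask contiguous runs until ~mask_rate of tokens
    are covered. Illustrative only; SpanBERT also masks at whole-word
    boundaries and replaces some spans with random tokens, omitted here."""
    budget = max(1, int(len(tokens) * mask_rate))
    out = list(tokens)
    covered = set()
    while len(covered) < budget:
        # Geometric(p) span length, clipped at max_span.
        length = 1
        while random.random() > p and length < max_span:
            length += 1
        start = random.randrange(0, max(1, len(tokens) - length + 1))
        for i in range(start, start + length):
            if len(covered) >= budget:
                break
            covered.add(i)
            out[i] = mask
    return out
```

Because whole spans are hidden, the model must predict each masked token from the span boundaries rather than from adjacent masked neighbors, which is what the span-boundary objective in the paper exploits.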
