SpanBERT: Improving Pre-training by Representing and Predicting Spans
Transactions of the Association for Computational Linguistics, vol. 8, pp. 64-77, 2020.
Abstract:
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random token...
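The span-masking idea in the abstract can be illustrated with a small sketch. This is not the paper's implementation; it assumes the commonly described setup (span lengths drawn from a geometric distribution, clipped at a maximum length, with roughly 15% of tokens masked), and the function name and hyperparameters here are illustrative.

```python
import random

def mask_contiguous_spans(tokens, mask_ratio=0.15, p=0.2, max_len=10,
                          mask_token="[MASK]"):
    """Mask contiguous spans until ~mask_ratio of the tokens are masked.

    Unlike BERT's independent per-token masking, whole spans are
    replaced, so the model must predict each span from its boundaries.
    """
    budget = max(1, int(len(tokens) * mask_ratio))  # total tokens to mask
    masked = set()
    while len(masked) < budget:
        # Span length ~ Geometric(p), clipped at max_len.
        length = 1
        while random.random() > p and length < max_len:
            length += 1
        # Never plan more masking than the remaining budget allows.
        length = min(length, budget - len(masked))
        start = random.randrange(len(tokens) - length + 1)
        masked.update(range(start, start + length))
    return [mask_token if i in masked else t
            for i, t in enumerate(tokens)]
```

On a 100-token input with the defaults, this masks exactly 15 positions, grouped into a few contiguous runs rather than scattered singletons.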