ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning


Abstract:

Pre-trained Language Models (PLMs) have shown strong performance on various downstream Natural Language Processing (NLP) tasks. However, PLMs still do not capture factual knowledge in text well, which is crucial for understanding a text as a whole, especially in document-level language understanding tasks. To address this issue, we …
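The abstract names contrastive learning as the core technique. The excerpt does not show ERICA's exact objective, but contrastive pre-training of this kind is typically built on an InfoNCE-style loss: pull an entity or relation representation toward its positive pair and push it away from negatives. Below is a minimal, self-contained sketch of that loss for a single anchor; the function name, inputs, and temperature value are illustrative assumptions, not ERICA's implementation.

```python
import math

def info_nce(sim_pos, sims_all, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor (illustrative sketch).

    sim_pos: similarity between the anchor and its positive pair
    sims_all: similarities between the anchor and all candidates
              (the positive plus the negatives)
    """
    # Scale similarities by the temperature, as in standard InfoNCE.
    logits = [s / temperature for s in sims_all]
    # Log-sum-exp with max subtraction for numerical stability.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    # Negative log-probability of picking the positive among all candidates.
    return -(sim_pos / temperature - log_denom)

# Toy usage: the anchor is far more similar to its positive than to the
# negatives, so the loss is close to zero.
loss = info_nce(sim_pos=0.9, sims_all=[0.9, 0.1, -0.2])
```

In a pre-training setup, the similarities would come from encoder representations of entity mentions or relation instances, and the loss would be averaged over anchors in a batch.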
