ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning
Abstract:
Pre-trained Language Models (PLMs) have shown strong performance on various downstream Natural Language Processing (NLP) tasks. However, PLMs still cannot adequately capture the factual knowledge in text, which is crucial for understanding a text as a whole, especially in document-level language understanding tasks. To address this issue, we propose ERICA, a framework that improves entity and relation understanding for PLMs via contrastive learning.