APRE: Annotation-Aware Prompt-Tuning for Relation Extraction
Neural Processing Letters (2024)
Abstract
Prompt-tuning has been successfully applied to classification tasks in natural language processing and has achieved promising performance. The main characteristic of prompt-tuning based classification is to verbalize class labels and predict masked tokens in a cloze-style task, which has the advantage of exploiting the knowledge in pre-trained language models (PLMs). However, because prompt templates are manually designed, they are prone to overfitting. Furthermore, traditional prompt templates are appended at the end of the original sentence, far from some of its semantic units, which weakens their ability to decode the semantic information that PLMs hold about the input. To aggregate more semantic information from PLMs for masked token prediction, we propose an annotation-aware prompt-tuning model for relation extraction. In our method, entity type representations are used as entity annotations and are implanted near the entity positions in the sentence to decode semantic information from PLMs, which makes fuller use of the knowledge in PLMs for relation extraction. In the experiments, our method is validated on the Chinese literature text and SemEval 2010 Task datasets, achieving F1-scores of 89.3% and 90.6%, respectively, which is state-of-the-art performance on these two public datasets. The results further demonstrate the effectiveness of our model in decoding semantic information from PLMs.
Key words
Relation extraction, Prompt tuning, Annotation, Semantic information
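The abstract describes implanting entity-type annotations next to the entity mentions and predicting a masked relation token in a cloze-style template. The sketch below illustrates that idea of prompt construction; it is not the authors' implementation, and the function name, template format, bracketed type markers, and verbalizer mapping are illustrative assumptions.

```python
# Minimal sketch of annotation-aware prompt construction (assumed design,
# not the paper's code): entity-type annotations are inserted beside the
# entity mentions, and a cloze-style template with a [MASK] slot is appended
# so a PLM can predict a verbalized relation label.

def build_annotated_prompt(tokens, head_span, tail_span, head_type, tail_type,
                           mask_token="[MASK]"):
    """Insert entity-type annotations near the entities and append a
    cloze-style relation template.

    tokens    : list of str, the original sentence tokens
    head_span : (start, end) indices of the head entity (end exclusive)
    tail_span : (start, end) indices of the tail entity (end exclusive)
    head_type : str, e.g. "person"
    tail_type : str, e.g. "organization"
    """
    annotated = []
    for i, tok in enumerate(tokens):
        # Place the type annotation immediately before the entity mention,
        # so the cue sits close to the semantic unit it describes rather
        # than at the tail of the sentence.
        if i == head_span[0]:
            annotated.append(f"[{head_type}]")
        if i == tail_span[0]:
            annotated.append(f"[{tail_type}]")
        annotated.append(tok)

    head = " ".join(tokens[head_span[0]:head_span[1]])
    tail = " ".join(tokens[tail_span[0]:tail_span[1]])
    # Cloze-style template: the PLM fills the masked position, and a
    # verbalizer maps the predicted token back to a relation label.
    template = f"{head} {mask_token} {tail} ."
    return " ".join(annotated) + " " + template


if __name__ == "__main__":
    sentence = "Steve Jobs founded Apple in Cupertino".split()
    prompt = build_annotated_prompt(
        sentence, head_span=(0, 2), tail_span=(3, 4),
        head_type="person", tail_type="organization")
    print(prompt)
    # The PLM's prediction at [MASK] (e.g. a token verbalizing "founder")
    # would then be mapped to the corresponding relation class.
```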