GAP: A novel Generative context-Aware Prompt-tuning method for relation extraction

Expert Systems with Applications (2024)

Abstract
Prompt-tuning was proposed to bridge the gap between pretraining and downstream tasks, and it has achieved promising results in Relation Extraction (RE). Although existing prompt-based RE methods outperform methods based on the fine-tuning paradigm, they require domain experts to design prompt templates, which makes them hard to generalize. In this paper, we propose a Generative context-Aware Prompt-tuning method (GAP) to address these limitations. Our method consists of three crucial modules: (1) a pretrained prompt generator module that extracts or generates relation triggers from the context and embeds them into the prompt tokens, (2) an in-domain adaptive pretraining module that further trains the Pretrained Language Models (PLMs) to improve the adaptability of the model, and (3) a joint contrastive loss that prevents PLMs from generating unrelated content and optimizes our model more effectively. We observe that the context-enhanced prompt tokens generated by GAP better guide PLMs toward more accurate predictions, and that in-domain pretraining effectively injects domain knowledge to enhance the robustness of the model. We conduct experiments on four public RE datasets under supervised and few-shot settings. The experimental results demonstrate the superiority of GAP over existing benchmark methods; in particular, GAP shows remarkable improvements in few-shot settings, with average F1 score gains of 3.5%, 2.7%, and 3.4% on the TACRED, TACREV, and Re-TACRED datasets, respectively. Furthermore, GAP also achieves state-of-the-art (SOTA) performance in supervised settings.
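The abstract does not spell out the exact form of the joint contrastive loss. As an illustration only, the sketch below assumes a PyTorch setup and a SupCon-style supervised contrastive term over [MASK] representations combined with the usual prompt-tuning cross-entropy over relation label words; the function name, tensor shapes, and the hyperparameters `temperature` and `alpha` are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def joint_contrastive_loss(mask_logits, mask_embeds, labels,
                           temperature=0.1, alpha=0.5):
    """Illustrative joint objective (assumed form, not the paper's exact loss).

    mask_logits: (batch, num_relations) -- scores over relation label words at [MASK]
    mask_embeds: (batch, hidden)        -- [MASK] hidden states from the PLM
    labels:      (batch,)               -- gold relation ids
    """
    # Prompt-tuning objective: predict the correct label word at the [MASK] position.
    ce = F.cross_entropy(mask_logits, labels)

    # Supervised contrastive term: pull together [MASK] representations of
    # examples that share a relation label, push apart the rest.
    z = F.normalize(mask_embeds, dim=-1)
    sim = z @ z.t() / temperature                      # pairwise cosine similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # exclude self-pairs

    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos.sum(dim=1).clamp(min=1)
    con_per_example = -log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_counts

    has_pos = pos.any(dim=1)                           # ignore examples with no positives
    con = con_per_example[has_pos].mean() if has_pos.any() else ce.new_zeros(())

    return ce + alpha * con
```

The weighting factor `alpha` balancing the two terms is an assumption here; any joint objective of this shape would be tuned per dataset.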
Keywords
Relation extraction, Prompt-tuning, Pretrained language model, Few-shot learning, Contrastive learning