KPT++: Refined knowledgeable prompt tuning for few-shot text classification

Knowledge-Based Systems (2023)

Abstract
Recently, the new paradigm "pre-train, prompt, and predict" has achieved remarkable few-shot learning results compared with the "pre-train, fine-tune" paradigm. Prompt-tuning inserts prompt text into the input and converts the classification task into a masked language modeling task. One of the key steps is to build a projection between the labels and the label words, i.e., the verbalizer. Knowledgeable prompt-tuning (KPT) integrates external knowledge into the verbalizer to improve and stabilize prompt-tuning; it uses word embeddings and various knowledge graphs to expand the label-word space to hundreds of words per class. However, some unreasonable label words in the verbalizer may harm accuracy. In this paper, a new method called KPT++ is proposed to improve few-shot text classification. KPT++ is a refined knowledgeable prompt-tuning method, which can also be regarded as an upgraded version of KPT. Specifically, KPT++ uses two newly proposed refinement techniques, prompt grammar refinement (PGR) and probability distribution refinement (PDR), to refine the knowledgeable verbalizer. Extensive experiments on few-shot text classification tasks demonstrate that KPT++ outperforms the state-of-the-art method KPT and other baseline methods. Furthermore, ablation experiments and case studies demonstrate the effectiveness of both the PGR and PDR refining methods. © 2023 Elsevier B.V. All rights reserved.
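The verbalizer described in the abstract can be illustrated with a minimal sketch: each class is mapped to several label words, and the masked-language-model probability of each label word filling the prompt's [MASK] slot is averaged into a class score. The model name, prompt template, class labels, and label-word lists below are illustrative assumptions, not the authors' configuration or code.

```python
# Minimal sketch of classification with a (knowledgeable) verbalizer:
# the input is wrapped in a prompt containing a [MASK] slot, and the
# probabilities of each class's label words at that slot are aggregated.
from transformers import pipeline

# Hypothetical backbone; KPT-style methods typically use a masked LM.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Verbalizer: each label is projected onto a set of label words
# (KPT expands these sets using word embeddings and knowledge graphs).
verbalizer = {
    "SPORTS": ["sports", "football", "basketball"],
    "TECH": ["technology", "software", "computer"],
}

def classify(text: str) -> str:
    # Prompt-tuning inserts a template with a [MASK] slot around the input.
    prompt = f"A [MASK] news: {text}"
    scores = {}
    for label, words in verbalizer.items():
        # Probability of each label word filling [MASK], averaged per class.
        preds = fill_mask(prompt, targets=words)
        scores[label] = sum(p["score"] for p in preds) / len(preds)
    return max(scores, key=scores.get)

print(classify("The team won the championship game last night."))
```

In this reading, KPT++'s refinements (PGR and PDR) would act on the label-word sets and their predicted probability distributions before aggregation, pruning label words that do not fit the prompt or whose distributions are unreliable.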
Keywords
Natural language processing, Prompt tuning, Few-shot learning, Text classification