PTR: Prompt Tuning with Rules for Text Classification
AI Open (2022)
Abstract
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved promising results on some few-class classification tasks, such as sentiment classification and natural language inference, manually designing prompts is cumbersome. Meanwhile, generating prompts automatically is also difficult and time-consuming. Therefore, obtaining effective prompts for complex many-class classification tasks still remains a challenge. In this paper, we propose to encode the prior knowledge of a classification task into rules, then design sub-prompts according to the rules, and finally combine the sub-prompts to handle the task. We name this Prompt Tuning method with Rules "PTR". Compared with existing prompt-based methods, PTR achieves a good trade-off between effectiveness and efficiency in building prompts. We conduct experiments on three many-class classification tasks, including relation classification, entity typing, and intent classification. The results show that PTR outperforms both vanilla and prompt tuning baselines, indicating the effectiveness of utilizing rules for prompt tuning. The source code of PTR is available at https://github.com/thunlp/PTR.
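To make the composition idea concrete, the following is a minimal illustrative sketch (not the authors' code; function names and the template shape are assumptions) of how rule-derived sub-prompts for relation classification might be concatenated into one template, with each sub-prompt contributing its own [MASK] slot:

```python
# Hypothetical sketch of PTR-style sub-prompt composition for relation
# classification. Each rule contributes a sub-prompt: one asking for the
# type of each entity, and one asking for the relational phrase between them.

def sub_prompt_entity(entity: str) -> str:
    # Sub-prompt querying the type of an entity (e.g. "person", "organization").
    return f"the [MASK] {entity}"

def sub_prompt_relation() -> str:
    # Sub-prompt querying the relational phrase linking the two entities.
    return "[MASK]"

def compose_prompt(sentence: str, head: str, tail: str) -> str:
    # Combine sub-prompts into a single template appended to the input sentence:
    # "<sentence> the [MASK] <head> [MASK] the [MASK] <tail>"
    return " ".join([
        sentence,
        sub_prompt_entity(head),
        sub_prompt_relation(),
        sub_prompt_entity(tail),
    ])

prompt = compose_prompt("Mark Twain wrote Huckleberry Finn.", "Mark Twain", "Huckleberry Finn")
print(prompt)
```

A PLM would then fill the three [MASK] slots jointly, and the combination of predicted label words determines the relation class; the exact label-word mapping is task-specific and not shown here.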
Key words
Pre-trained language models, Prompt tuning