Helping Language Models Learn More: Multi-dimensional Task Prompt for Few-shot Tuning

Jinta Weng, Jiarui Zhang, Yue Hu, Daidong Fa, Xiaofeng Xu, Heyan Huang

2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC)

Abstract
Large language models (LLMs) can serve as accessible and intelligent chatbots: users construct natural language queries and input the prompt directly into the model. However, different prompt constructions often lead to uncertainty in the answers, making it hard to exploit the specific knowledge of LLMs (such as ChatGPT). To alleviate this, we use an interpretable structure to explain the prompt-learning principle in LLMs, which shows that the effectiveness of language models is determined by position changes of the task-related tokens. We therefore propose MTPrompt, a multi-dimensional task prompt learning method based on task-related object, summary, and task description information. By automatically building and searching for appropriate prompts, MTPrompt achieves the best results in the few-shot setting on five different datasets. In addition, we demonstrate the effectiveness and stability of our method across different experimental settings and in ablation experiments. When interacting with large language models, embedding more task-related information into prompts makes it easier to elicit the knowledge embedded in them.
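As a rough illustration of the idea, and not the paper's actual implementation, the sketch below assembles a prompt from the three task-related dimensions the abstract names (task description, object, and summary) and prepends them to the original input. All field names and the template layout are hypothetical assumptions.

```python
# Minimal sketch (assumed template, not the authors' code): build a
# multi-dimensional task prompt from task description, object, and
# summary information, then attach the original input.

def build_mt_prompt(task_description: str, task_object: str,
                    summary: str, input_text: str) -> str:
    """Concatenate the task-related dimensions with the original input."""
    return (
        f"Task: {task_description}\n"
        f"Object: {task_object}\n"
        f"Summary: {summary}\n"
        f"Input: {input_text}\n"
        "Answer:"
    )

# Example: a sentiment-classification query, one of the task types
# mentioned in the keywords below.
prompt = build_mt_prompt(
    task_description="Classify the sentiment of the input sentence.",
    task_object="movie review",
    summary="The sentence expresses an opinion about a film.",
    input_text="The plot was thin but the acting was superb.",
)
print(prompt)
```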
Keywords
Language Model, Technology Planning Project, Natural Language, Ablation Experiments, Descriptive Summary, Description Task, Objective Description, Task-related Information, Large Datasets, Classification Task, Batch Size, Types Of Tasks, Semantic Similarity, Original Input, Sentiment Analysis, Emotion Categories, Suitable Locations, Text Classification, Few-shot Learning, Pre-trained Language Models, Fine-tuning Process, Fine-tuning Method, Descriptive Metadata, NLP Tasks, Pre-training Process, Zero-shot, Text Classification Tasks, Learning Rate