AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
arXiv (2023)
Abstract
Many natural language processing (NLP) tasks rely on labeled data to train
machine learning models with high performance. However, data annotation is
time-consuming and expensive, especially when the task involves a large amount
of data or requires specialized domain knowledge. Recently, GPT-3.5 series models have
demonstrated remarkable few-shot and zero-shot ability across various NLP
tasks. In this paper, we first claim that large language models (LLMs), such as
GPT-3.5, can serve as excellent crowdsourced annotators when provided with
sufficient guidance and demonstration examples. Accordingly, we propose AnnoLLM,
an annotation system powered by LLMs, which adopts a two-step approach,
explain-then-annotate. Concretely, we first prompt LLMs to provide explanations
for why the specific ground-truth answer/label was assigned to a given
example. Then, we construct a few-shot chain-of-thought prompt with the
self-generated explanation and employ it to annotate the unlabeled data with
LLMs. Our experimental results on three tasks, including user input and keyword
relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or
performs on par with crowdsourced annotators. Furthermore, we build the first
conversation-based information retrieval dataset employing AnnoLLM. This
dataset is designed to facilitate the development of retrieval models capable
of retrieving pertinent documents for conversational text. Human evaluation has
validated the dataset's high quality.
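The two-step explain-then-annotate procedure described above can be sketched as a small prompting pipeline. This is a minimal illustration, not the paper's implementation: `query_llm` is a hypothetical stand-in for any GPT-3.5-style completion API, and the prompt wording is illustrative rather than the paper's exact templates.

```python
# Sketch of the explain-then-annotate pipeline, assuming a generic
# text-completion LLM behind the hypothetical `query_llm` function.

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g. a GPT-3.5-style endpoint)."""
    raise NotImplementedError("plug in your LLM client here")


def build_explanation_prompt(example: str, label: str) -> str:
    # Step 1: given a labeled demonstration, ask the LLM to explain
    # WHY the ground-truth label is correct for this example.
    return (
        f"Example: {example}\n"
        f"Label: {label}\n"
        "Explain step by step why this label is correct."
    )


def build_annotation_prompt(demos: list[tuple[str, str, str]],
                            new_example: str) -> str:
    # Step 2: assemble a few-shot chain-of-thought prompt from the
    # self-generated explanations, ending with the unlabeled item so
    # the LLM produces its own reasoning and label.
    parts = []
    for example, label, explanation in demos:
        parts.append(
            f"Example: {example}\nReasoning: {explanation}\nLabel: {label}"
        )
    parts.append(f"Example: {new_example}\nReasoning:")
    return "\n\n".join(parts)
```

In use, one would first call `query_llm(build_explanation_prompt(...))` for each demonstration to collect explanations, then annotate unlabeled data with `query_llm(build_annotation_prompt(...))` and parse the label from the response.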