
Harnessing Multi-Role Capabilities of Large Language Models for Open-Domain Question Answering

Hongda Sun, Yuxuan Liu, Chengwei Wu, Haiyu Yan, Cheng Tai, Xin Gao, Shuo Shang, Rui Yan

WWW 2024 (2024)

Abstract
Open-domain question answering (ODQA) has emerged as a pivotal research spotlight in information systems. Existing methods follow two main paradigms to collect evidence: (1) the retrieve-then-read paradigm retrieves pertinent documents from an external corpus; and (2) the generate-then-read paradigm employs large language models (LLMs) to generate relevant documents. However, neither can fully address multifaceted requirements for evidence. To this end, we propose LLMQA, a generalized framework that formulates the ODQA process into three basic steps: query expansion, document selection, and answer generation, combining the superiority of both retrieval-based and generation-based evidence. Since LLMs exhibit their excellent capabilities to accomplish various tasks, we instruct LLMs to play multiple roles as generators, rerankers, and evaluators within our framework, integrating them to collaborate in the ODQA process. Furthermore, we introduce a novel prompt optimization algorithm to refine role-playing prompts and steer LLMs to produce higher-quality evidence and answers. Extensive experimental results on widely used benchmarks (NQ, WebQ, and TriviaQA) demonstrate that LLMQA achieves the best performance in terms of both answer accuracy and evidence quality, showcasing its potential for advancing ODQA research and applications.
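The three-step pipeline described in the abstract (query expansion, document selection, answer generation, with an LLM playing generator, reranker, and evaluator roles) could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names and the stubbed `llm` call are hypothetical placeholders for real LLM requests and retrieval.

```python
# Hypothetical sketch of the LLMQA three-step flow. The stub `llm`
# stands in for a real LLM API call; every name here is illustrative.

def llm(role: str, prompt: str) -> str:
    """Stub LLM call that returns canned strings per role."""
    if role == "generator":
        return f"expanded: {prompt}"
    if role == "reranker":
        return "doc-1"
    return "final answer"

def expand_query(question: str) -> str:
    # Step 1: query expansion -- the LLM acts as a generator.
    return llm("generator", question)

def select_documents(query: str, candidates: list[str]) -> str:
    # Step 2: document selection -- the LLM acts as a reranker over
    # retrieved and generated candidate evidence.
    return llm("reranker", query + " | " + " ; ".join(candidates))

def generate_answer(question: str, evidence: str) -> str:
    # Step 3: answer generation; the evaluator role (scoring candidate
    # answers, feeding prompt optimization) is elided here.
    return llm("evaluator", question + " | " + evidence)

def llmqa(question: str, corpus: list[str]) -> str:
    expanded = expand_query(question)
    evidence = select_documents(expanded, corpus)
    return generate_answer(question, evidence)

print(llmqa("Who wrote Hamlet?", ["doc-1", "doc-2"]))
```

In the paper's framing, the same underlying LLM is steered into each role by role-playing prompts, which are themselves refined by the proposed prompt optimization algorithm.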