Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?
CoRR (2024)
Abstract
Multiple-choice question answering (MCQA) is often used to evaluate large
language models (LLMs). To see if MCQA assesses LLMs as intended, we probe if
LLMs can perform MCQA with choices-only prompts, where models must select the
correct answer only from the choices. In three MCQA datasets and four LLMs,
this prompt bests a majority baseline in 11/12 cases, with up to 0.33 accuracy
gain. To help explain this behavior, we conduct an in-depth, black-box analysis
on memorization, choice dynamics, and question inference. Our key findings are
threefold. First, we find no evidence that the choices-only accuracy stems from
memorization alone. Second, priors over individual choices do not fully explain
choices-only accuracy, hinting that LLMs use the group dynamics of choices.
Third, LLMs have some ability to infer a relevant question from choices, and
surprisingly can sometimes even match the original question. We hope to
motivate the use of stronger baselines in MCQA benchmarks, the design of robust
MCQA datasets, and further efforts to explain LLM decision-making.
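To make the probing setup concrete, here is a minimal sketch of how a choices-only prompt might be constructed. The function name and prompt wording are hypothetical illustrations, not the paper's actual template:

```python
# Hypothetical sketch of a choices-only MCQA prompt: the model sees
# only the answer choices, never the question stem.
def choices_only_prompt(choices):
    """Build a prompt listing only the lettered choices, with no question."""
    letters = "ABCD"
    lines = ["Answer the multiple-choice question. Only the choices are shown."]
    for letter, choice in zip(letters, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

prompt = choices_only_prompt(["Paris", "London", "Berlin", "Madrid"])
print(prompt)
```

A model that beats the majority baseline on such prompts must be exploiting something other than the question, e.g. memorization, choice priors, or the group dynamics among the choices that the paper analyzes.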