Human Behavioral Benchmarking: Numeric Magnitude Comparison Effects in Large Language Models
ACL 2023
Abstract
Large Language Models (LLMs) do not differentially represent numbers, which
are pervasive in text. In contrast, neuroscience research has identified
distinct neural representations for numbers and words. In this work, we
investigate how well popular LLMs capture the magnitudes of numbers (e.g., that
4 < 5) through a behavioral lens. Prior research on the representational
capabilities of LLMs evaluates whether they show human-level performance, for
instance, high overall accuracy on standard benchmarks. Here, we ask a
different question, one inspired by cognitive science: How closely do the
number representations of LLMs correspond to those of human language users, who
typically demonstrate the distance, size, and ratio effects? We depend on a
linking hypothesis to map the similarities among the model embeddings of number
words and digits to human response times. The results reveal surprisingly
human-like representations across language models of different architectures,
despite the absence of the neural circuitry that directly supports these
representations in the human brain. This research shows the utility of
understanding LLMs using behavioral benchmarks and points the way to future
work on the number representations of LLMs and their cognitive plausibility.
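To make the linking hypothesis concrete, here is a minimal sketch (not the paper's code) of how pairwise similarities among digit embeddings could be checked for a distance effect. The get_embedding() lookup is a hypothetical placeholder, and cosine similarity plus a Spearman rank correlation are illustrative analysis choices; under the linking hypothesis, more similar embeddings would predict slower human comparison responses.

```python
# Illustrative sketch only: the paper maps embedding similarities to human
# response times via a linking hypothesis; here we just check whether
# similarity falls off with numerical distance (the distance effect).
import itertools

import numpy as np
from scipy.stats import spearmanr


def get_embedding(token: str) -> np.ndarray:
    """Hypothetical stand-in for an LLM embedding lookup (not the paper's code).

    Faked with log-Gaussian "number line" tuning curves from numerical
    cognition so the script runs end to end; a real analysis would pull
    the vector for `token` from the LLM under study.
    """
    value = float(token)
    centers = np.arange(1, 10)   # one preferred magnitude per dimension
    sigma = 0.4
    return np.exp(-((np.log(value) - np.log(centers)) ** 2) / (2 * sigma ** 2))


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


digits = [str(d) for d in range(1, 10)]
similarities, distances = [], []
for a, b in itertools.combinations(digits, 2):
    similarities.append(cosine(get_embedding(a), get_embedding(b)))
    distances.append(abs(int(a) - int(b)))

# Distance effect: numerically closer pairs (e.g., 4 vs. 5) should have more
# similar embeddings, which the linking hypothesis reads as longer human
# response times. A negative rank correlation between similarity and |a - b|
# is consistent with that pattern.
rho, p = spearmanr(similarities, distances)
print(f"Spearman rho(similarity, |a-b|) = {rho:.2f} (p = {p:.3g})")
```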