LHMKE: A Large-scale Holistic Multi-subject Knowledge Evaluation Benchmark for Chinese Large Language Models
CoRR (2024)
Abstract
Chinese Large Language Models (LLMs) have recently demonstrated impressive
capabilities across various NLP benchmarks and real-world applications.
However, the existing benchmarks for comprehensively evaluating these LLMs are
still insufficient, particularly in terms of measuring knowledge that LLMs
capture. To address this issue, current datasets collect questions from Chinese
examinations across different subjects and educational levels. Yet, these
benchmarks primarily focus on objective questions such as multiple-choice
questions, leading to a lack of diversity in question types. To tackle this
problem, we propose LHMKE, a Large-scale, Holistic, and Multi-subject Knowledge
Evaluation benchmark in this paper. LHMKE is designed to provide a
comprehensive evaluation of the knowledge acquisition capabilities of Chinese
LLMs. It encompasses 10,465 questions across 75 tasks covering 30 subjects,
ranging from primary school to professional certification exams. Notably, LHMKE
includes both objective and subjective questions, offering a more holistic
evaluation of the knowledge level of LLMs. We have assessed 11 Chinese LLMs
under the zero-shot setting, which aligns with real examinations, and compared
their performance across different subjects. We also conduct an in-depth
analysis to check whether GPT-4 can automatically score subjective predictions.
Our findings suggest that LHMKE is a challenging and advanced testbed for
Chinese LLMs.
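The evaluation protocol described above — zero-shot prompting, with exact-match scoring for objective (multiple-choice) questions and model-based scoring (e.g. GPT-4 as an automatic judge) for subjective questions — can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code; the function names, item schema, and the `model`/`judge` callables are all assumptions.

```python
# Hypothetical sketch of the zero-shot evaluation protocol: objective
# (multiple-choice) items are scored by exact match against the gold choice,
# subjective items by a judge model returning a score in [0, 1].
# All names and the item schema are illustrative assumptions.

def score_objective(prediction: str, answer: str) -> float:
    """Exact-match scoring for multiple-choice answers (e.g. 'A', 'B')."""
    return 1.0 if prediction.strip().upper() == answer.strip().upper() else 0.0

def score_subjective(prediction: str, reference: str, judge) -> float:
    """Score a free-form answer with a judge (e.g. a GPT-4 grading prompt)."""
    return judge(prediction, reference)

def evaluate(items, model, judge) -> float:
    """Average score over a mixed set of objective and subjective items.

    Zero-shot: the model is prompted with only the question itself,
    with no in-context exemplars, mirroring a real examination.
    """
    total = 0.0
    for item in items:
        pred = model(item["question"])  # zero-shot prompt: just the question
        if item["type"] == "objective":
            total += score_objective(pred, item["answer"])
        else:
            total += score_subjective(pred, item["answer"], judge)
    return total / len(items)
```

For example, with a stub model that answers one multiple-choice item correctly and a stub judge that awards 0.5 to a free-form answer, `evaluate` returns the mean of the two scores (0.75).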