
HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models

Guijin Son, Hanwool Lee, Suwan Kim, Hyun-Ah Kim, Jaecheol Lee, Jae-Seung Yeom, Jaewook Jung, Jung Ho Kim, S Kim

arXiv (Cornell University), 2023

Abstract
Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities across a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back-translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. In contrast to traditional evaluation suites focused on token or sequence classification and specific mathematical or logical reasoning, HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. Comparative analysis with prior Korean benchmarks indicates that HAE-RAE Bench poses a greater challenge to non-native models by disrupting the transfer of abilities and knowledge learned from English.
Keywords
Korean knowledge, language models, HAE-RAE