Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction
arXiv (2024)
Abstract
Large Language Models (LLMs) have been reported to outperform existing
automatic evaluation metrics in some tasks, such as text summarization and
machine translation. However, there has been a lack of research on LLMs as
evaluators in grammatical error correction (GEC). In this study, we investigate
the performance of LLMs in GEC evaluation by employing prompts designed to
incorporate various evaluation criteria inspired by previous research. Our
extensive experimental results demonstrate that GPT-4 achieved Kendall's rank
correlation of 0.662 with human judgments, surpassing all existing methods.
Furthermore, our analysis underscores the significance of LLM scale in recent GEC
evaluation and highlights fluency as a particularly important evaluation
criterion.
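The reported agreement with human judgments is measured by Kendall's rank correlation, which counts concordant versus discordant pairs between two rankings. A minimal sketch of the computation (the scores below are hypothetical, not taken from the paper):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.

    Assumes no tied scores; the paper does not specify which tau
    variant was used.
    """
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical metric scores vs. human judgment scores for 5 GEC systems
metric_scores = [0.91, 0.84, 0.77, 0.69, 0.80]
human_scores  = [0.88, 0.80, 0.70, 0.72, 0.83]
print(kendall_tau(metric_scores, human_scores))  # → 0.6
```

A value of 1.0 would mean the metric ranks all system pairs exactly as humans do; 0.662 therefore indicates substantially, though not perfectly, human-aligned rankings.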