Revisiting Meta-evaluation for Grammatical Error Correction
arXiv (2024)
Abstract
Metrics are the foundation of automatic evaluation in grammatical error
correction (GEC), and the evaluation of the metrics themselves
(meta-evaluation) relies on their correlation with human judgments. However,
conventional meta-evaluations in English GEC face several challenges,
including biases caused by inconsistencies in evaluation granularity and an
outdated setup using classical systems. These problems can lead to
misinterpretation of metrics and potentially hinder the applicability of GEC
techniques. To address these issues, this paper proposes SEEDA, a new dataset
for GEC meta-evaluation. SEEDA consists of corrections with human ratings at
two different granularities, edit-based and sentence-based, covering 12
state-of-the-art systems including large language models (LLMs), as well as
two human corrections with different focuses. The improved correlations
obtained by aligning granularity in sentence-level meta-evaluation suggest
that edit-based metrics may have been underestimated in existing studies.
Furthermore, the correlations of most metrics decrease when moving from
classical to neural systems, indicating that traditional metrics are
relatively poor at evaluating fluently corrected sentences with many edits.
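For context, meta-evaluation of the kind described here typically computes the correlation between a metric's per-system scores and human judgments over the same systems. Below is a minimal sketch of that idea using SciPy; all system names and score values are hypothetical illustrations, not data from the paper.

```python
# Minimal sketch of system-level meta-evaluation: correlate an automatic
# metric's scores with human ratings across GEC systems.
# All values below are hypothetical, for illustration only.
from scipy.stats import pearsonr, spearmanr

# One score per system (e.g., averaged over a shared test set).
human_scores = [0.72, 0.65, 0.58, 0.81]   # human judgment score per system
metric_scores = [0.70, 0.60, 0.62, 0.79]  # automatic metric score per system

r, _ = pearsonr(human_scores, metric_scores)     # linear agreement
rho, _ = spearmanr(human_scores, metric_scores)  # rank agreement
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

A higher correlation indicates that the metric's ranking of systems better matches human preferences; the paper's finding is that such correlations shift depending on whether the granularity of the metric (edit-based vs. sentence-based) matches that of the human ratings.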