Lexical Repetitions Lead to Rote Learning: Unveiling the Impact of Lexical Overlap in Train and Test Reference Summaries.
CoRR (2023)
Abstract
Ideal summarization models should generalize to novel summary-worthy content
without remembering reference training summaries by rote. However, a single
average performance score on the entire test set is inadequate for determining
such model competencies. We propose a fine-grained evaluation protocol by
partitioning a test set based on the lexical similarity of reference test
summaries with training summaries. We observe up to a 5x (1.2x) difference in
ROUGE-2 (entity recall) scores between the subsets with the lowest and highest
similarity. Next, we show that such training repetitions also make a model
vulnerable to rote learning, reproducing data artifacts such as factual errors,
especially when reference test summaries are lexically close to training
summaries. Consequently, we propose to limit lexical repetitions in training
summaries during both supervised fine-tuning and likelihood calibration stages
to improve the performance on novel test cases while retaining average
performance. Our automatic and human evaluations on novel test subsets and
recent news articles show that limiting lexical repetitions in training
summaries can prevent rote learning and improve generalization.
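The partitioning protocol described in the abstract can be sketched with a small amount of code: score each reference test summary by its maximum lexical (bigram) overlap with any training summary, then bucket the test set by that score. This is an illustrative sketch, not the paper's implementation; `bigram_overlap` and `partition_by_similarity` are hypothetical names, and the paper's actual protocol may use a full ROUGE-2 implementation rather than this simplified F1-style overlap.

```python
from collections import Counter

def bigrams(text):
    """Bag of word bigrams from a whitespace-tokenized, lowercased string."""
    toks = text.lower().split()
    return Counter(zip(toks, toks[1:]))

def bigram_overlap(test_summary, train_summary):
    """F1-style bigram overlap between two summaries (a ROUGE-2-like score)."""
    t, r = bigrams(test_summary), bigrams(train_summary)
    common = sum((t & r).values())          # clipped bigram matches
    total = sum(t.values()) + sum(r.values())
    return 2 * common / total if total else 0.0

def partition_by_similarity(test_summaries, train_summaries, n_buckets=4):
    """Sort test summaries by max overlap with any training summary,
    then split them into roughly equal-sized similarity buckets."""
    scored = sorted(
        (max(bigram_overlap(s, tr) for tr in train_summaries), s)
        for s in test_summaries
    )
    size = max(1, len(scored) // n_buckets)
    return [scored[i:i + size] for i in range(0, len(scored), size)]
```

Evaluating ROUGE-2 or entity recall separately on the lowest- and highest-similarity buckets then surfaces the kind of performance gap the abstract reports, which a single test-set average would hide.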