RepEval: Effective Text Evaluation with LLM Representation
arXiv (2024)
Abstract
Automatic evaluation metrics for generated texts play an important role in
the NLG field, especially with the rapid growth of LLMs. However, existing
metrics are often limited to specific scenarios, making it challenging to meet
the evaluation requirements of expanding LLM applications. Therefore, there is
a demand for new, flexible, and effective metrics. In this study, we introduce
RepEval, the first metric leveraging the projection of LLM representations for
evaluation. RepEval requires minimal sample pairs for training, and through
simple prompt modifications, it can easily transition to various tasks. Results
on ten datasets from three tasks demonstrate the high effectiveness of our
method, which exhibits stronger correlations with human judgments compared to
previous metrics, even outperforming GPT-4. Our work underscores the richness
of information regarding text quality embedded within LLM representations,
offering insights for the development of new metrics.
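The abstract describes scoring texts by projecting their LLM representations onto a direction learned from a small number of good/bad sample pairs. The following is a minimal illustrative sketch of that general idea, not the paper's exact algorithm: the representation vectors here are random stand-ins (in practice they would be LLM hidden states), and the projection direction is learned as a simple difference of class means, which is one assumed instantiation.

```python
import numpy as np

# Illustrative sketch of representation-projection scoring (assumed setup,
# not the paper's exact method). Representations are random stand-ins for
# LLM hidden states of higher-quality vs. lower-quality texts.
rng = np.random.default_rng(0)
dim = 16

# Hypothetical minimal training pairs: five "good" and five "bad" examples.
good_reps = rng.normal(loc=1.0, size=(5, dim))
bad_reps = rng.normal(loc=-1.0, size=(5, dim))

# Learn a projection direction as the normalized difference of class means.
direction = good_reps.mean(axis=0) - bad_reps.mean(axis=0)
direction /= np.linalg.norm(direction)

def rep_score(representation: np.ndarray) -> float:
    """Quality score = projection of a representation onto the learned direction."""
    return float(representation @ direction)

# By construction, the mean "good" representation projects higher than the
# mean "bad" one, since the direction points from bad toward good.
good_mean = good_reps.mean(axis=0)
bad_mean = bad_reps.mean(axis=0)
print(rep_score(good_mean) > rep_score(bad_mean))  # → True
```

Switching tasks would then amount to re-extracting representations under a modified prompt and reusing the same lightweight projection step, which is what makes such a metric cheap to adapt.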