Revisiting Text-to-Image Evaluation with Gecko: On Metrics, Prompts, and Human Ratings
arXiv (2024)
Abstract
While text-to-image (T2I) generative models have become ubiquitous, they do
not necessarily generate images that align with a given prompt. While previous
work has evaluated T2I alignment by proposing metrics, benchmarks, and
templates for collecting human judgements, the quality of these components is
not systematically measured. Human-rated prompt sets are generally small and
the reliability of the ratings – and thereby the prompt set used to compare
models – is not evaluated. We address this gap by performing an extensive
study evaluating auto-eval metrics and human templates. We provide three main
contributions: (1) We introduce a comprehensive skills-based benchmark that can
discriminate models across different human templates. This skills-based
benchmark categorises prompts into sub-skills, allowing a practitioner to
pinpoint not only which skills are challenging, but at what level of complexity
a skill becomes challenging. (2) We gather human ratings across four templates
and four T2I models for a total of >100K annotations. This allows us to
understand where differences arise due to inherent ambiguity in the prompt and
where they arise due to differences in metric and model quality. (3) Finally,
we introduce a new QA-based auto-eval metric that is better correlated with
human ratings than existing metrics for our new dataset, across different human
templates, and on TIFA160.
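To make the QA-based auto-eval idea concrete, below is a minimal sketch of the general approach (generate grounded questions from the prompt, answer them with a VQA model, and average the scores). This is an illustration only, not the paper's exact Gecko metric; `generate_questions` and `answer_with_vqa` are hypothetical placeholders standing in for an LLM question generator and a VQA model.

from typing import List


def generate_questions(prompt: str) -> List[str]:
    # Placeholder: a real implementation would use an LLM to turn the
    # prompt into grounded yes/no questions covering each element.
    return [f"Does the image show: {part.strip()}?" for part in prompt.split(",")]


def answer_with_vqa(image, question: str) -> float:
    # Placeholder: a real implementation would return the VQA model's
    # probability that the answer to `question` is "yes" for `image`.
    return 1.0


def qa_alignment_score(image, prompt: str) -> float:
    # Average per-question "yes" probability as the prompt-image alignment score.
    questions = generate_questions(prompt)
    if not questions:
        return 0.0
    return sum(answer_with_vqa(image, q) for q in questions) / len(questions)


if __name__ == "__main__":
    print(qa_alignment_score(image=None, prompt="a red cube, a blue sphere"))

In practice, such a score can be compared against the collected human ratings (e.g. via rank correlation) to judge how well the auto-eval metric tracks human judgement.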