IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
CoRR (2024)
Abstract
Current foundation models exhibit impressive capabilities when prompted
either with text only or with both image and text inputs. But do their
capabilities change depending on the input modality? In this work, we propose
IsoBench, a benchmark dataset containing problems from four major
areas: math, science, algorithms, and games. Each example is presented with
multiple isomorphic representations of inputs, such as visual,
textual, and mathematical presentations. IsoBench provides fine-grained
feedback to diagnose performance gaps caused by the form of the representation.
Across various foundation models, we observe that on the same problem, models
have a consistent preference towards textual representations. Most prominently,
when evaluated on all IsoBench problems, Claude-3 Opus performs 28.7 points
worse when provided with images instead of text; similarly, GPT-4 Turbo is 18.7
points worse and Gemini Pro is 14.9 points worse. Finally, we present two
prompting techniques, IsoCombination and IsoScratchPad,
which improve model performance by considering combinations of, and
translations between, different input representations.
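The IsoCombination idea described above — presenting several equivalent representations of the same problem in one prompt — can be sketched as follows. This is a minimal illustration only; the function and representation names are hypothetical and not taken from the paper's released code.

```python
# Hypothetical sketch of IsoCombination-style prompting: concatenate multiple
# isomorphic representations of one problem into a single combined prompt.
# All names here are illustrative assumptions, not the paper's actual API.

def iso_combination_prompt(representations: dict, question: str) -> str:
    """Build one prompt that presents every available representation
    of the same underlying problem, followed by the question."""
    parts = ["The same problem is given below in several equivalent forms."]
    for name, content in representations.items():
        parts.append(f"[{name} representation]\n{content}")
    parts.append(question)
    return "\n\n".join(parts)

# Example usage with two textual stand-ins for isomorphic inputs
# (an image representation would be attached separately in a real API call):
problem = {
    "textual": "A white knight stands on a1 and a white pawn on b3.",
    "mathematical": "FEN: 8/8/8/8/8/1P6/8/N7 w - - 0 1",
}
prompt = iso_combination_prompt(problem, "Is the pawn defended by the knight?")
```

In this sketch, IsoScratchPad would differ by first asking the model to translate one representation (e.g. an image) into another (e.g. text) before answering, rather than supplying both at once.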