PPTC-R benchmark: Towards Evaluating the Robustness of Large Language Models for PowerPoint Task Completion
arXiv (2024)
Abstract
The growing dependence on Large Language Models (LLMs) for completing user
instructions necessitates a comprehensive understanding of their robustness in
complex, real-world task completion. To address this critical need, we propose
the PowerPoint Task Completion Robustness benchmark (PPTC-R) to measure LLMs'
robustness to user PPT task instructions and software versions. Specifically,
we construct adversarial user instructions by attacking them at the sentence,
semantic, and multi-language levels. To assess robustness to software versions,
we vary the number of provided APIs to simulate both the newest-version and
earlier-version settings. We then test 3 closed-source and 4 open-source LLMs
under these robustness settings, aiming to evaluate how such deviations affect
LLMs' API calls for task completion. We find that GPT-4 exhibits the highest
performance and strong robustness on our benchmark, particularly in the
version-update and multilingual settings. However, all LLMs lose robustness
when confronted with multiple challenges (e.g., multi-turn interaction)
simultaneously, leading to significant performance drops. We further analyze
the robustness behavior and error causes of LLMs on our benchmark, which
provides valuable insights for researchers to understand LLM robustness in
task completion and to develop more robust LLMs and agents. We release the
code and data at .
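
To make the abstract's setup concrete, here is a minimal Python sketch of the two robustness axes it describes: perturbing a user instruction at the sentence, semantic, and multi-language levels, and varying the visible API set to simulate different software versions. The specific perturbation rules, API names (`add_slide`, `set_smart_art`, etc.), and translations below are hypothetical illustrations assumed for this sketch, not the paper's actual implementation.

```python
import random

# Toy distractor sentences for the sentence-level attack (assumed examples).
SENTENCE_NOISE = ["Please note this is important.", "Do it carefully."]

def sentence_level(instruction: str, rng: random.Random) -> str:
    """Sentence-level attack: append an irrelevant sentence to the instruction."""
    return instruction + " " + rng.choice(SENTENCE_NOISE)

# Toy meaning-preserving rewrites for the semantic-level attack (assumed).
PARAPHRASES = {"Add a title": "Insert a heading", "slide": "page"}

def semantic_level(instruction: str) -> str:
    """Semantic-level attack: paraphrase the wording while keeping the task intent."""
    for src, tgt in PARAPHRASES.items():
        instruction = instruction.replace(src, tgt)
    return instruction

# Pre-translated variants for the multi-language attack (assumed examples).
TRANSLATIONS = {
    "Add a title slide.": {
        "zh": "添加一个标题幻灯片。",
        "de": "Füge eine Titelfolie hinzu.",
    }
}

def multilanguage_level(instruction: str, lang: str) -> str:
    """Multi-language attack: issue the same instruction in another language."""
    return TRANSLATIONS.get(instruction, {}).get(lang, instruction)

def version_setting(api_pool: list[str], setting: str) -> list[str]:
    """Simulate software versions by varying the API set shown to the LLM:
    'newest' exposes an extra API, 'earlier' withholds the latest ones."""
    if setting == "newest":
        return api_pool + ["set_smart_art"]  # hypothetical newly added API
    if setting == "earlier":
        return api_pool[: max(1, len(api_pool) - 2)]  # drop the newest APIs
    return api_pool

if __name__ == "__main__":
    rng = random.Random(0)
    base = "Add a title slide."
    print(sentence_level(base, rng))
    print(semantic_level(base))
    print(multilanguage_level(base, "zh"))
    print(version_setting(["add_slide", "set_title", "insert_text", "add_chart"], "earlier"))
```

Under this framing, each perturbed instruction (or reduced API set) is fed to the LLM and its resulting API-call sequence is scored against the unperturbed case, which is how deviations in robustness would surface as performance drops.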