ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code
arXiv (2023)
Abstract
Despite Large Language Models (LLMs) like GPT-4 achieving impressive results
in function-level code generation, they struggle with repository-scale code
understanding (e.g., coming up with the right arguments for calling routines),
which requires a deeper comprehension of complex file interactions. Moreover, LLM
agents that interact with repository code (e.g., compiling it and evaluating its
execution) have recently emerged, prompting the need to assess
their performance. These gaps have motivated our development of ML-Bench, a
benchmark rooted in real-world programming applications that leverage existing
code repositories to perform tasks. Addressing the need for LLMs to interpret
long code contexts and translate instructions into precise, executable scripts,
ML-Bench comprises 9,641 annotated examples across 18 GitHub repositories,
challenging LLMs to accommodate user-specified arguments and documentation
intricacies effectively. To evaluate both LLMs and AI agents, two setups are
employed: ML-LLM-Bench for assessing LLMs' text-to-code conversion within a
predefined deployment environment, and ML-Agent-Bench for testing autonomous
agents in an end-to-end task execution within a Linux sandbox environment. Our
findings indicate that while GPT-4o leads with a Pass@5 rate surpassing 50%,
there remains significant scope for improvement, highlighted by issues such as
hallucinated outputs and difficulties with bash script generation. Notably, in
the more demanding ML-Agent-Bench, GPT-4o achieves a 76.47% success rate,
reflecting the efficacy of iterative action and feedback in complex task
resolution.
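The abstract reports a Pass@5 rate for GPT-4o. Pass@k is commonly computed with the unbiased estimator introduced by Chen et al. (2021) for code-generation benchmarks; the sketch below is an illustration of that standard metric, not code from the paper itself.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generations of which c are
    correct, passes. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations per task, 3 of them correct, evaluated at k=5.
score = pass_at_k(10, 3, 5)
```

Averaging this quantity over all benchmark tasks yields the reported Pass@k number.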