CodeEditorBench: Evaluating Code Editing Capability of Large Language Models
CoRR (2024)
Abstract
Large Language Models (LLMs) for code are rapidly evolving, with code editing
emerging as a critical capability. We introduce CodeEditorBench, an evaluation
framework designed to rigorously assess the performance of LLMs in code editing
tasks, including debugging, translating, polishing, and requirement switching.
Unlike existing benchmarks focusing solely on code generation, CodeEditorBench
emphasizes real-world scenarios and practical aspects of software development.
We curate diverse coding challenges and scenarios from five sources, covering
various programming languages, complexity levels, and editing tasks. Evaluation
of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and
GPT-4) outperform open-source models in CodeEditorBench, highlighting
differences in model performance across problem types and prompt
sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by
providing a robust platform for assessing code editing capabilities. We will
release all prompts and datasets to enable the community to expand the dataset
and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to
the advancement of LLMs in code editing and provide a valuable resource for
researchers and practitioners.
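To make the evaluation setting concrete, the sketch below shows how a code-editing benchmark record and an execution-based pass/fail check might be structured. This is an illustrative assumption, not the official CodeEditorBench harness: the names EditingProblem, run_candidate, and passes_all_tests, and the field layout (task_type, source_code, instruction, test_cases) are hypothetical.

# Minimal sketch of an execution-based check for a code-editing task.
# All names and fields here are illustrative assumptions, not the paper's API.
import subprocess
from dataclasses import dataclass, field


@dataclass
class EditingProblem:
    problem_id: str
    task_type: str                 # e.g. "debug", "translate", "polish", "switch"
    source_code: str               # code the model is asked to edit
    instruction: str               # natural-language editing request
    test_cases: list = field(default_factory=list)  # (stdin, expected_stdout) pairs


def run_candidate(candidate_code: str, stdin_text: str, timeout: float = 5.0) -> str:
    """Run a candidate Python solution in a subprocess and capture its stdout."""
    result = subprocess.run(
        ["python", "-c", candidate_code],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout.strip()


def passes_all_tests(candidate_code: str, problem: EditingProblem) -> bool:
    """Return True only if the edited code matches the expected output on every test."""
    for stdin_text, expected in problem.test_cases:
        try:
            if run_candidate(candidate_code, stdin_text) != expected.strip():
                return False
        except (subprocess.TimeoutExpired, OSError):
            return False
    return True


if __name__ == "__main__":
    # Toy "debug" instance: the source program is off by one.
    problem = EditingProblem(
        problem_id="demo-001",
        task_type="debug",
        source_code="print(int(input()) + 2)",
        instruction="Fix the program so it prints n + 1.",
        test_cases=[("3\n", "4"), ("10\n", "11")],
    )
    fixed = "print(int(input()) + 1)"   # a model's proposed edit
    print("pass" if passes_all_tests(fixed, problem) else "fail")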