Tackling Execution-Based Evaluation for NL2Bash
CoRR (2024)
Abstract
Given the recent advancements of Large Language Models (LLMs), the task of translating natural language prompts into different programming languages (code generation) has attracted immense attention for its wide applicability across domains. In particular, code generation for Bash (NL2Bash) is widely used to produce Bash scripts that automate tasks such as performance monitoring, compilation, system administration, and system diagnostics. Beyond generation itself, validating the synthesized code is critical before it is used in any application. Various validation methods have been proposed, both direct (execution-based evaluation) and indirect (e.g., exact/partial match, BLEU score). Among these, Execution-based Evaluation (EE) validates the predicted code by comparing the execution output of the model's prediction against the expected output on the system. However, designing and implementing such an execution-based evaluation system for NL2Bash is not a trivial task. In this paper, we present machinery for execution-based evaluation of NL2Bash. We create a set of 50 prompts to evaluate several popular LLMs on NL2Bash. We also analyze advantages and challenges of EE, such as syntactically different yet semantically equivalent Bash scripts generated by different LLMs, or syntactically correct but semantically incorrect Bash scripts, and how we capture and process them correctly.
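As a rough illustration of the comparison step the abstract describes (not the paper's actual harness), the sketch below runs a predicted and a reference Bash script in isolated scratch directories and compares their standard output, so that syntactically different but semantically equivalent commands can still be judged equal. The file names predicted.sh and reference.sh and the fixtures/ directory are hypothetical placeholders for illustration only.

#!/usr/bin/env bash
# Minimal sketch of execution-based evaluation for a single NL2Bash test case.
# Assumes two illustrative files: predicted.sh (model output) and reference.sh (gold command).
set -u

workdir=$(mktemp -d)                               # shared starting state for both runs
cp -r fixtures/. "$workdir"/ 2>/dev/null || true   # hypothetical test fixtures, if any

run_in_sandbox() {
    # Execute a script inside a fresh copy of the starting state and emit its stdout,
    # so side effects of one run cannot influence the other.
    local script=$1
    local sandbox
    sandbox=$(mktemp -d)
    cp -r "$workdir"/. "$sandbox"/
    ( cd "$sandbox" && bash "$script" ) 2>/dev/null
    rm -rf "$sandbox"
}

pred_out=$(run_in_sandbox "$(pwd)/predicted.sh")
ref_out=$(run_in_sandbox "$(pwd)/reference.sh")

# Compare execution outputs rather than the script text itself.
if [ "$pred_out" = "$ref_out" ]; then
    echo "PASS: outputs match"
else
    echo "FAIL: outputs differ"
fi

rm -rf "$workdir"

A real harness would also need to compare filesystem side effects and exit codes, and to normalize nondeterministic output (timestamps, process IDs), which this sketch deliberately omits.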