CodeCoT: Tackling Code Syntax Errors in CoT Reasoning for Code Generation
CoRR (2023)
Abstract
Chain-of-thought (CoT) prompting has emerged as a groundbreaking tool in NLP, notable
for its efficacy on complex reasoning tasks such as mathematical proofs.
Its application to code generation, however, faces a distinct challenge:
although code generated with CoT reasoning is logically correct, it often
contains syntax errors (e.g., "invalid syntax" reports) at execution time, to
the point that CoT's pass@1 on HumanEval can fall below the zero-shot result.
In this paper, we present Code Chain-of-Thought (CodeCoT), which integrates CoT
with a self-examination process for code generation. CodeCoT begins with the
LLM using CoT to develop initial code, ensuring that the generated code
follows the correct logic flow. CodeCoT then generates test cases and enters a
self-examination phase, in which the generated code is executed against those
test cases in a local environment. If the local environment raises an error
(e.g., an invalid syntax error), CodeCoT iteratively refines the code based on
the error feedback. Within this loop, the generated code not only follows the
logic flow of the code description but is also free of the syntax errors
caught by the self-examination process.
Our evaluation results show that CodeCoT improves the effectiveness of code
generation; for example, it increases pass@1 from 75.6 …
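The generate-test-refine loop described above can be sketched in a few lines. The
Python sketch below is illustrative only, not the authors' implementation:
`llm_generate`, `run_locally`, and the budget `MAX_ITERATIONS` are hypothetical
names standing in for an LLM client, a local sandboxed execution step, and a
refinement limit.

```python
import subprocess
import sys
import tempfile

MAX_ITERATIONS = 5  # hypothetical refinement budget; not specified in the abstract


def llm_generate(prompt: str) -> str:
    """Hypothetical LLM call; plug in a real CoT-prompted model here."""
    raise NotImplementedError


def run_locally(code: str, tests: str) -> str:
    """Run the code plus its test cases in a subprocess; return stderr ('' on success)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + tests)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.stderr


def codecot(task: str) -> str:
    # 1. CoT prompting produces an initial draft that follows the task's logic flow.
    code = llm_generate(f"Think step by step, then write Python code for:\n{task}")
    # 2. The model also produces test cases used to exercise the draft.
    tests = llm_generate(f"Write assert-based Python test cases for:\n{task}")
    # 3. Self-examination: execute locally, feed error reports back, and refine.
    for _ in range(MAX_ITERATIONS):
        error = run_locally(code, tests)
        if not error:  # empty stderr: the draft ran and all asserts passed
            return code
        code = llm_generate(
            f"This code fails with the error below. Fix it.\n{error}\n{code}"
        )
    return code  # return the last refinement if the budget is exhausted
```

The point of the loop is that the model sees the raw interpreter output
(e.g., a SyntaxError traceback), so the refinement prompt carries the same
signal a developer would use when debugging locally.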
Keywords
developer, program, learning, test