CodeFort: Robust Training for Code Generation Models

Yuhao Zhang, Shiqi Wang, Haifeng Qian, Zijian Wang, Mingyue Shang, Linbo Liu, Sanjay Krishna Gouda, Baishakhi Ray, Murali Krishna Ramanathan, Xiaofei Ma, Anoop Deoras

arXiv (2024)

Abstract
Code generation models are not robust to small perturbations, which often lead to inconsistent and incorrect generations and significantly degrade the performance of these models. Improving the robustness of code generation models is crucial to a better user experience when these models are deployed in real-world applications. However, existing efforts have not addressed this issue for code generation models. To fill this gap, we propose CodeFort, a framework to improve the robustness of code generation models, generalizing a large variety of code perturbations to enrich the training data and enabling various robust training strategies, mixing data augmentation, batch augmentation, adversarial logits pairing, and contrastive learning, all carefully designed to support high-throughput training. Extensive evaluations show that we improve the average robust pass rates of baseline CodeGen models from 14.79 to 21.74. Notably, the improvement in robustness against code-syntax perturbations is evidenced by a significant decrease in pass rate drop from 95.04
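To make the training strategies named in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how batch augmentation and adversarial logits pairing might be combined for a causal code LM. The function `robust_training_step`, the `alp_weight` parameter, the toy model, and the assumption that original and perturbed programs share aligned labels are all illustrative simplifications, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def robust_training_step(model, input_ids, perturbed_ids, labels, alp_weight=0.5):
    """One robust-training step: LM loss on original and perturbed views,
    plus an adversarial-logits-pairing term that pulls the two together.

    `model` maps token ids of shape (batch, seq) to logits (batch, seq, vocab).
    `perturbed_ids` are the same programs after a code perturbation; for this
    sketch we assume the target labels stay aligned with both views.
    """
    # Batch augmentation: train on the original and the perturbed examples in the same step.
    logits_orig = model(input_ids)       # (B, T, V)
    logits_pert = model(perturbed_ids)   # (B, T, V)

    lm_loss = 0.5 * (
        F.cross_entropy(logits_orig.flatten(0, 1), labels.flatten())
        + F.cross_entropy(logits_pert.flatten(0, 1), labels.flatten())
    )

    # Adversarial logits pairing: penalize divergence between the perturbed
    # and original output distributions (original treated as the target).
    alp_loss = F.kl_div(
        F.log_softmax(logits_pert, dim=-1),
        F.softmax(logits_orig, dim=-1).detach(),
        reduction="batchmean",
    )
    return lm_loss + alp_weight * alp_loss


if __name__ == "__main__":
    # Tiny stand-in model and random data, only to show the training step runs.
    vocab, dim, B, T = 100, 32, 4, 16
    model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
    ids = torch.randint(0, vocab, (B, T))
    perturbed = ids.clone()  # placeholder for a real code perturbation
    loss = robust_training_step(model, ids, perturbed, labels=ids)
    loss.backward()
```

In a real setup one would plug in an actual perturbation generator and a pretrained code model; the pairing weight and the choice of KL versus an L2 penalty on logits are tuning decisions not specified by the abstract.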