IRCoCo: Immediate Rewards-Guided Deep Reinforcement Learning for Code Completion
CoRR (2024)
Abstract
Code completion aims to enhance programming productivity by predicting
potential code based on the current programming context. Recently, pretrained
language models (LMs) have become prominent in this field. Various approaches
have been proposed to fine-tune LMs using supervised fine-tuning (SFT)
techniques for code completion. However, the inherent exposure bias of these
models can cause errors to accumulate early in sequence completion, leading
to even more errors in subsequent completions. To address this problem, deep
reinforcement learning (DRL) offers an alternative technique for fine-tuning
LMs for code completion, one that can improve generalization capability and
overall performance. Nevertheless, integrating DRL-based strategies into code
completion faces two major challenges: 1) The dynamic nature of the code
context requires the completion model to quickly adapt to changes, which poses
difficulties for conventional DRL strategies that rely on a delayed reward
assigned only to the final code state. 2) It is difficult to evaluate the
correctness of partial code; thus, reward redistribution-based strategies
cannot be adapted to code
completion. To tackle these challenges, we propose IRCoCo, a code
completion-specific DRL-based fine-tuning framework. This framework is designed
to provide immediate rewards as feedback for detecting dynamic context changes
arising from continuous edits during code completion. With the aid of this
immediate feedback, the fine-tuned LM gains a more precise understanding of
the current context, enabling more effective model adjustment and more
refined optimization of code completion. Experimental results demonstrate that
fine-tuning pretrained LMs with IRCoCo leads to significant improvements in the
code completion task, outperforming both SFT-based and other DRL-based
baselines.
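
The abstract contrasts conventional DRL's delayed reward on the final code
state with IRCoCo's immediate per-step rewards. The Python sketch below is
only a rough illustration of that idea, not the paper's actual method: the
policy is assumed to be a HuggingFace-style causal LM exposing .logits, and
immediate_reward is a hypothetical placeholder for a learned per-step reward
signal.

# A minimal sketch of immediate-reward policy-gradient fine-tuning for code
# completion. Assumptions (not from the paper): the policy is an
# autoregressive LM with HuggingFace-style outputs, and `immediate_reward`
# is a hypothetical stand-in for a learned per-step reward model.
import torch

def immediate_reward(prefix_ids, token_id):
    """Hypothetical per-step reward for appending `token_id` to the current
    prefix (e.g., from a learned evaluator of the partial code). Placeholder."""
    return torch.rand(())  # random stand-in reward

def drl_step(policy, prompt_ids, max_new_tokens, optimizer, gamma=1.0):
    """One REINFORCE-style update in which every sampled token receives an
    immediate reward, instead of one delayed reward for the final program."""
    ids = prompt_ids.clone()
    log_probs, rewards = [], []
    for _ in range(max_new_tokens):
        logits = policy(ids).logits[:, -1, :]          # next-token logits
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        rewards.append(immediate_reward(ids, tok))
        ids = torch.cat([ids, tok.unsqueeze(-1)], dim=-1)
    # Discounted return-to-go computed from the immediate rewards.
    returns, g = [], torch.zeros(())
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    loss = -torch.stack([lp * ret for lp, ret in zip(log_probs, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

With a single delayed reward, every log-probability term would be weighted by
one terminal score; here each sampled token is credited with its own
return-to-go built from immediate rewards, which is the credit-assignment
change the abstract describes.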