REINFOREST: Reinforcing Semantic Code Similarity for Cross-Lingual Code Search Models
arXiv (2023)
Abstract
This paper introduces a novel code-to-code search technique that enhances the
performance of Large Language Models (LLMs) by incorporating both static and
dynamic features and by utilizing both similar and dissimilar examples
during training. We present the first-ever code search method that encodes
dynamic runtime information during training without the need to execute either
the corpus under search or the search query at inference time, and the first
code search technique that trains on both positive and negative reference
samples. To validate the efficacy of our approach, we perform a set of studies
demonstrating the capability of enhanced LLMs to perform cross-language
code-to-code search. Our evaluation demonstrates that the effectiveness of our
approach is consistent across various model architectures and programming
languages. We outperform the state-of-the-art cross-language search tool by up
to 44.7%. Moreover, our ablation studies reveal that even a single positive
and negative reference sample in the training process results in substantial
performance improvements, demonstrating that both similar and dissimilar
references are important components of code search. Importantly, we show that
enhanced, well-crafted, fine-tuned models consistently outperform enhanced
larger modern LLMs without fine-tuning, even when enhancing the largest
available LLMs, highlighting the importance of open-source models. To ensure
the reproducibility and extensibility of our research, we present an
open-source implementation of our tool and training procedures, called
REINFOREST.
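
To make the abstract's claim about training on both positive and negative reference samples concrete, the sketch below shows one plausible form such a training step could take: a triplet-style contrastive objective that pulls a query embedding toward a semantically similar snippet and pushes it away from a dissimilar one. The encoder, margin value, and loss form here are illustrative assumptions only; the abstract does not specify REINFOREST's actual objective.

    # Minimal sketch (not REINFOREST's actual objective): a triplet-style
    # contrastive step using one similar (positive) and one dissimilar
    # (negative) reference sample per query, as the abstract describes.
    import torch
    import torch.nn.functional as F

    def training_step(encoder, query_code, positive_code, negative_code,
                      margin=0.5):
        """Pull the query embedding toward the positive reference and push
        it away from the negative reference with a margin-based loss."""
        q = encoder(query_code)      # embedding of the search query snippet
        p = encoder(positive_code)   # semantically similar snippet (any language)
        n = encoder(negative_code)   # semantically dissimilar snippet

        sim_pos = F.cosine_similarity(q, p, dim=-1)
        sim_neg = F.cosine_similarity(q, n, dim=-1)

        # Loss reaches zero once the positive is at least `margin` more
        # similar to the query than the negative is.
        return torch.clamp(margin - sim_pos + sim_neg, min=0).mean()

    # Toy usage with a stand-in encoder over token-id batches of shape (B, T).
    encoder = torch.nn.EmbeddingBag(1000, 64)
    batch = lambda: torch.randint(0, 1000, (2, 16))
    loss = training_step(encoder, batch(), batch(), batch())
    loss.backward()

A margin loss is only one plausible choice; any objective that scores positive references above negative ones would fit the training setup the abstract describes.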