Your Code Secret Belongs to Me: Neural Code Completion Tools Can Memorize Hard-Coded Credentials
arXiv (2023)
Abstract
Neural Code Completion Tools (NCCTs), which are built upon language modeling techniques and can
accurately suggest contextually relevant code snippets, have reshaped the field of software
engineering. However, language models may emit their training data verbatim during inference
when given appropriate prompts. This memorization property raises a privacy concern for NCCTs:
leaked hard-coded credentials can grant unauthorized access to applications, systems, or
networks. Therefore, to answer whether NCCTs emit hard-coded credentials, we propose an
evaluation tool called Hard-coded Credential Revealer (HCR). HCR constructs test prompts from
GitHub code files containing credentials to reveal the memorization phenomenon of NCCTs. It then
applies four filters to remove ill-formatted credentials. Finally, HCR directly checks the
validity of a set of non-sensitive credentials. We apply HCR to evaluate three representative
types of NCCTs: commercial NCCTs, open-source models, and chatbots with code completion
capability. Our experimental results show that NCCTs can not only return precise pieces of
their training data but also inadvertently leak additional secret strings. Notably, two valid
credentials were identified during our experiments. HCR therefore raises a severe privacy
concern about the potential leakage of hard-coded credentials in the training data of
commercial NCCTs. All artifacts and data are released for future research at
https://github.com/HCR-Repo/HCR.
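
The abstract describes a three-step pipeline: construct a test prompt by removing a known credential from a GitHub code file, filter out ill-formatted completions, and check candidate validity. The sketch below illustrates only the first two steps under loose assumptions; the function names, the masking strategy, and the single regex filter are illustrative and are not the authors' actual HCR implementation.

```python
# Minimal sketch of an HCR-style prompt construction and format filter.
# All names and patterns here are assumptions for illustration only.
import re

# Shape of an AWS access key ID, used as a stand-in for one of HCR's format filters.
AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")


def build_test_prompt(code: str, credential: str) -> str:
    """Remove the hard-coded credential so the completion model must fill it in."""
    return code.replace(credential, "", 1)


def passes_format_filter(candidate: str) -> bool:
    """Keep only completions that match the expected credential format."""
    return bool(AWS_KEY_PATTERN.fullmatch(candidate))


if __name__ == "__main__":
    snippet = 'client = connect(aws_access_key_id="AKIAIOSFODNN7EXAMPLE")'
    secret = "AKIAIOSFODNN7EXAMPLE"  # AWS documentation example key, not a real secret

    prompt = build_test_prompt(snippet, secret)
    print(prompt)  # prompt fed to the code completion tool

    completion = "AKIAABCDEFGHIJKLMNOP"  # pretend model output
    print(passes_format_filter(completion))  # True: well-formatted, kept for validity checking
```

In the paper's pipeline, completions that survive such filters would then be checked for validity; that last step is omitted here because it depends on the target service.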