Multi-LLM Collaboration + Data-Centric Innovation = 2x Better Vulnerability Repair
arXiv (2024)
Abstract
The advances of deep learning (DL) have paved the way for automatic software
vulnerability repair approaches, which effectively learn the mapping from the
vulnerable code to the fixed code. Nevertheless, existing DL-based
vulnerability repair methods face notable limitations: 1) they struggle to
handle lengthy vulnerable code, 2) they treat code as natural language text,
neglecting its inherent structure, and 3) they do not tap into the valuable
expert knowledge available in expert systems such as CWE.
To address this, we propose VulMaster, a Transformer-based neural network
model that excels at generating vulnerability repairs through data-centric
innovation. Specifically, VulMaster utilizes and combines several types of
input data: the complete vulnerable code regardless of its length, the
structure of the vulnerable code, and expert knowledge from the CWE system.
Additionally, VulMaster leverages the collaboration between two Large Language
Models (LLMs), CodeT5 and ChatGPT: CodeT5 acts as the customizable backbone
LLM, fine-tuned with the training data, while ChatGPT supplements by providing
missing relevant inputs to CodeT5. We evaluated VulMaster on a real-world C/C++
vulnerability repair dataset comprising 1,754 projects with 5,800 vulnerable
functions. The experimental results demonstrated that VulMaster exhibits
substantial improvements compared to the learning-based state-of-the-art
vulnerability repair approach. Specifically, VulMaster improves the EM, BLEU,
and CodeBLEU scores from 10.2% to 20.0%, 21.3% to 29.3%, and 32.5% to
40.9%, respectively.
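The abstract does not spell out how inputs of any size are fed to a fixed-context backbone such as CodeT5. A minimal sketch of one plausible data-centric preprocessing step follows: long vulnerable code is split into segments that each fit the backbone's window, and every segment is paired with the shared auxiliary inputs (code structure and CWE knowledge). All names here are hypothetical, and the whitespace "tokenizer" and Fusion-in-Decoder-style framing are illustrative assumptions, not VulMaster's actual implementation.

```python
# Hypothetical sketch: segmenting long vulnerable code and pairing each
# segment with shared auxiliary inputs. Names are illustrative only.

def build_model_inputs(vulnerable_code, code_structure, cwe_knowledge,
                       max_segment_tokens=512):
    """Split long vulnerable code into segments so no encoder input exceeds
    the backbone LLM's context window; attach the structure and CWE expert
    knowledge to every segment."""
    tokens = vulnerable_code.split()  # stand-in for a real subword tokenizer
    segments = [
        " ".join(tokens[i:i + max_segment_tokens])
        for i in range(0, len(tokens), max_segment_tokens)
    ]
    # Each segment becomes one encoder input; a decoder that attends over all
    # encoded segments jointly (Fusion-in-Decoder style) is one common way to
    # let a fixed-window model read arbitrarily long code.
    return [
        f"code: {seg} structure: {code_structure} knowledge: {cwe_knowledge}"
        for seg in segments
    ]
```

For example, with `max_segment_tokens=2`, a four-token function yields two encoder inputs, each carrying the same structure and CWE fields.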