Instruction Backdoor Attacks Against Customized LLMs
arxiv(2024)
摘要
The increasing demand for customized Large Language Models (LLMs) has led to
the development of solutions like GPTs. These solutions facilitate tailored LLM
creation via natural language prompts without coding. However, the
trustworthiness of third-party custom versions of LLMs remains an essential
concern. In this paper, we propose the first instruction backdoor attacks
against applications integrated with untrusted customized LLMs (e.g., GPTs).
Specifically, these attacks embed the backdoor into the custom version of LLMs
by designing prompts with backdoor instructions, outputting the attacker's
desired result when inputs contain the pre-defined triggers. Our attack
includes 3 levels of attacks: word-level, syntax-level, and semantic-level,
which adopt different types of triggers with progressive stealthiness. We
stress that our attacks do not require fine-tuning or any modification to the
backend LLMs, adhering strictly to GPTs development guidelines. We conduct
extensive experiments on 6 prominent LLMs and 5 benchmark text classification
datasets. The results show that our instruction backdoor attacks achieve the
desired attack performance without compromising utility. Additionally, we
propose two defense strategies and demonstrate their effectiveness in reducing
such attacks. Our findings highlight the vulnerability and the potential risks
of LLM customization such as GPTs.
更多查看译文
AI 理解论文
溯源树
样例
![](https://originalfileserver.aminer.cn/sys/aminer/pubs/mrt_preview.jpeg)
生成溯源树,研究论文发展脉络
Chat Paper
正在生成论文摘要