Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks

CoRR(2023)

Cited 32|Views127
No score
Abstract
Recent advances in instruction-following large language models (LLMs) have led to dramatic improvements in a range of NLP tasks. Unfortunately, we find that the same improved capabilities amplify the dual-use risks for malicious purposes of these models. Dual-use is difficult to prevent as instruction-following capabilities now enable standard attacks from computer security. The capabilities of these instruction-following LLMs provide strong economic incentives for dual-use by malicious actors. In particular, we show that instruction-following LLMs can produce targeted malicious content, including hate speech and scams, bypassing in-the-wild defenses implemented by LLM API vendors. Our analysis shows that this content can be generated economically and at cost likely lower than with human effort alone. Together, our findings suggest that LLMs will increasingly attract more sophisticated adversaries and attacks, and addressing these attacks may require new approaches to mitigations.
More
Translated text
Key words
Security Standards,Large Language Models,Cybersecurity,Economic Incentives,Hate Speech,Improvement In Range,Malicious Activities,Five-point Likert Scale,Standard Estimates,US Government,Standard Program,Evidential,Virtual Machines,Output Filter,Personal Situation,Malware,Execution Of Operations,Mechanism Of Onset,Text Generation,Input Filter,Injection Attacks,DaVinci,Logical Consistency,Protective Memory
AI Read Science
Must-Reading Tree
Example
Generate MRT to find the research sequence of this paper
Chat Paper
Summary is being generated by the instructions you defined