Weak-to-Strong Jailbreaking on Large Language Models
CoRR (2024)
Abstract
Although significant efforts have been dedicated to aligning large language
models (LLMs), red-teaming reports suggest that these carefully aligned LLMs
could still be jailbroken through adversarial prompts, tuning, or decoding.
Upon examining the jailbreaking vulnerability of aligned LLMs, we observe that
the decoding distributions of jailbroken and aligned models differ only in the
initial generations. This observation motivates us to propose the
weak-to-strong jailbreaking attack, where adversaries can utilize smaller
unsafe/aligned LLMs (e.g., 7B) to guide jailbreaking against significantly
larger aligned LLMs (e.g., 70B). To jailbreak, the adversary only needs to additionally
decode the two smaller LLMs once, incurring minimal computation and latency
compared with decoding the larger LLM. The efficacy of this attack is
demonstrated through experiments conducted on five models from three different
organizations. Our study reveals a previously unnoticed yet efficient way of
jailbreaking, exposing an urgent safety issue that needs to be considered when
aligning LLMs. As an initial attempt, we propose a defense strategy to protect
against such attacks, but creating more advanced defenses remains challenging.
The code for replicating the method is available at
https://github.com/XuandongZhao/weak-to-strong
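
The abstract does not spell out the exact decoding rule, but its description of guiding a large aligned model by additionally decoding two small models once is consistent with token-level logit steering. The sketch below is a hypothetical illustration of that idea, assuming the guidance is an alpha-scaled log-probability offset between a small unsafe model and a small aligned model that share the target model's vocabulary. The model names, the `ALPHA` value, and the `weak_to_strong_generate` helper are illustrative assumptions, not taken from the paper or its repository.

```python
# Hypothetical sketch of weak-to-strong guided decoding (not the authors' exact
# implementation; see the linked repository for that). Assumes all three models
# share a tokenizer/vocabulary, e.g., the Llama-2 family.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ALPHA = 1.5  # assumed guidance strength; illustrative value only


def load(name):
    # In practice you would also pass dtype/device options here.
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()
    return tok, model


tok, strong = load("meta-llama/Llama-2-70b-chat-hf")  # large aligned target
_, weak_safe = load("meta-llama/Llama-2-7b-chat-hf")  # small aligned reference
_, weak_unsafe = load("path/to/jailbroken-7b")        # small unsafe model (hypothetical path)


@torch.no_grad()
def weak_to_strong_generate(prompt, max_new_tokens=128):
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # One forward pass per model on the current prefix.
        log_strong = torch.log_softmax(strong(ids).logits[:, -1, :], dim=-1)
        log_safe = torch.log_softmax(weak_safe(ids).logits[:, -1, :], dim=-1)
        log_unsafe = torch.log_softmax(weak_unsafe(ids).logits[:, -1, :], dim=-1)
        # Steer the strong model's next-token distribution by the behavior gap
        # between the small unsafe and small aligned models.
        steered = log_strong + ALPHA * (log_unsafe - log_safe)
        probs = torch.softmax(steered, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)
```

Under this reading, each generation step adds only two small forward passes on top of the single large one, which is consistent with the abstract's claim of minimal extra computation and latency.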