Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization
CoRR (2023)
Abstract
While significant attention has been dedicated to exploiting weaknesses in
LLMs through jailbreaking attacks, there remains a paucity of effort in
defending against these attacks. We point out a pivotal factor contributing to
the success of jailbreaks: the intrinsic conflict between the goals of being
helpful and ensuring safety. Accordingly, we propose to integrate goal
prioritization at both training and inference stages to counteract.
Implementing goal prioritization during inference substantially diminishes the
Attack Success Rate (ASR) of jailbreaking from 66.4% to 3.6% for ChatGPT, while
integrating goal prioritization into model training reduces the ASR from 71.0%
to 6.6% for Llama-2-13B. Remarkably, even when no jailbreaking
samples are included during training, our approach slashes the ASR by half.
Additionally, our findings reveal that while stronger LLMs face greater safety
risks, they also possess a greater capacity to be steered towards defending
against such attacks, owing to their stronger ability in instruction
following. Our work thus contributes to the comprehension of jailbreaking
attacks and defenses, and sheds light on the relationship between LLMs'
capability and safety. Our code is available at .
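
The inference-stage defense described above amounts to wrapping each user query in an instruction that explicitly ranks safety above helpfulness before the model responds. A minimal sketch of such a wrapper is below; the template wording and function name are illustrative assumptions, not the authors' exact prompt.

```python
# Hedged sketch of inference-time goal prioritization: prepend an
# instruction that puts the safety goal strictly before the helpfulness
# goal. The template text here is an assumed example, not the paper's
# actual prompt.

GOAL_PRIORITY_PREFIX = (
    "You are an assistant with two goals, in strict priority order:\n"
    "1. Safety: refuse any request for harmful, illegal, or unethical content.\n"
    "2. Helpfulness: only if the request is safe, answer it as well as you can.\n"
    "Always apply goal 1 before goal 2.\n\n"
)

def prioritized_prompt(user_query: str) -> str:
    """Wrap a raw user query with the goal-prioritization instruction."""
    return f"{GOAL_PRIORITY_PREFIX}User query: {user_query}\nResponse:"
```

The wrapped string would then be sent to the LLM in place of the raw query, so the priority ordering is in context for every request, including adversarial jailbreak prompts.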