Word Level Robustness Enhancement: Fight Perturbation with Perturbation
Proceedings of the AAAI Conference on Artificial Intelligence (2022)
Abstract
State-of-the-art deep NLP models have achieved impressive improvements on many tasks. However, they are found to be vulnerable to certain perturbations. Before they are widely adopted, the fundamental issue of robustness needs to be addressed. In this paper, we design a robustness enhancement method to defend against word-substitution perturbations, whose basic idea is to fight perturbation with perturbation. We find that although many well-trained deep models are not robust in the presence of adversarial samples, they do satisfy weak robustness: they can handle most non-crafted perturbations well. Taking advantage of this weak robustness property, we use non-crafted perturbations to resist the adversarial perturbations crafted by attackers. Our method consists of two main stages. The first stage uses randomized perturbation to conform the input to the data distribution. The second stage uses randomized perturbation to eliminate the instability of prediction results and strengthen the robustness guarantee. Experimental results show that our method significantly improves the ability of deep models to resist state-of-the-art adversarial attacks while maintaining prediction performance on the original clean data.
Keywords
Speech & Natural Language Processing (SNLP)