MACROBERT: Maximizing Certified Region of BERT to Adversarial Word Substitutions.

DASFAA (2) (2021)

Cited by: 0 | Views: 8
Abstract
Deep neural networks are powerful but vulnerable: they are easily fooled by carefully crafted adversarial examples. It is therefore important to develop models with certified robustness, which provably guarantee that the prediction cannot be misled by any attack within a given perturbation set. Although a certified method based on randomized smoothing was recently proposed, it does not maximize the certified region. We therefore develop an approach to train models with maximized certified regions by replacing the base classifier with a soft smoothed classifier that is differentiable during propagation.
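The core construction the abstract describes, a soft smoothed classifier that averages the base model's probabilities over random word substitutions, can be sketched in a minimal, self-contained form. This is an illustrative toy, not the paper's implementation: the rule-based `base_classifier`, the `SYNONYMS` table, and all names here are assumptions standing in for BERT and the attack's actual synonym sets.

```python
import numpy as np

# Hypothetical synonym sets (assumption for illustration; in the paper the
# perturbation sets come from the word-substitution attack model).
SYNONYMS = {
    "good": ["good", "great", "fine"],
    "movie": ["movie", "film"],
    "bad": ["bad", "poor", "awful"],
}

rng = np.random.default_rng(0)

def base_classifier(tokens):
    """Toy stand-in for BERT: maps tokens to softmax class probabilities.
    Any differentiable model could take its place."""
    score = sum(1.0 if t in ("good", "great", "fine")
                else -1.0 if t in ("bad", "poor", "awful")
                else 0.0
                for t in tokens)
    logits = np.array([-score, score])        # [negative, positive]
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax

def soft_smoothed_classifier(tokens, n_samples=100):
    """Monte-Carlo estimate of the soft smoothed classifier: average the
    base classifier's *probabilities* (not hard labels) over random synonym
    substitutions. Because probabilities are averaged, the map stays
    differentiable whenever the base model is, which is what allows
    training toward a maximized certified region."""
    probs = np.zeros(2)
    for _ in range(n_samples):
        perturbed = [rng.choice(SYNONYMS.get(t, [t])) for t in tokens]
        probs += base_classifier(perturbed)
    return probs / n_samples

p = soft_smoothed_classifier(["a", "good", "movie"])
print(p)  # averaged class probabilities, summing to 1
```

A hard smoothed classifier would instead average one-hot predictions, which is non-differentiable; the soft variant is what makes end-to-end training of the smoothed model possible.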
Keywords
Randomized smoothing, Adversarial examples, Certified region