Defending Against Adversarial Examples via Soft Decision Trees Embedding

Proceedings of the 27th ACM International Conference on Multimedia (2019)

Abstract
Convolutional neural networks (CNNs) have been shown to be vulnerable to adversarial examples, which contain imperceptible perturbations. In this paper, we propose an approach to defend against adversarial examples with soft decision tree embedding. First, we extract the semantic features of adversarial examples with a feature extraction network. Then, a specific soft decision tree is trained and embedded to select the key semantic features from each convolutional feature map, and the selected features are fed to a light-weight classification network. To this end, we use the probability distributions of the tree nodes to quantify the semantic features. In this way, small perturbations can be effectively removed, and the selected features are more discriminative in identifying adversarial examples. Moreover, the influence of adversarial perturbations on classification can be reduced by migrating the interpretability of soft decision trees into black-box neural networks. We conduct experiments against state-of-the-art adversarial attacks. The experimental results demonstrate that our proposed approach can effectively defend against these attacks and improve the robustness of deep neural networks.
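The abstract describes routing convolutional features through a soft decision tree whose node probability distributions serve as denoised, discriminative features for a light-weight classifier. Below is a minimal sketch of such a tree in PyTorch, in the spirit of Frosst and Hinton's soft decision trees; the class name, depth, feature sizes, and the one-tree-per-feature-map wiring are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a soft decision tree over CNN features (assumed design, not the
# paper's code). Inner nodes are sigmoid routers; leaves hold learnable class
# distributions; the soft output averages leaves by path probability.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDecisionTree(nn.Module):
    def __init__(self, in_features: int, depth: int, num_classes: int):
        super().__init__()
        self.depth = depth
        self.num_inner = 2 ** depth - 1           # routing (inner) nodes
        self.num_leaves = 2 ** depth              # leaf nodes
        # Each inner node applies a learned linear filter; the sigmoid of its
        # response is the probability of routing the input to the right child.
        self.inner = nn.Linear(in_features, self.num_inner)
        # Each leaf holds an unnormalized class distribution.
        self.leaf_logits = nn.Parameter(torch.zeros(self.num_leaves, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) flattened semantic features from a conv layer.
        right = torch.sigmoid(self.inner(x))      # (batch, num_inner)
        # Probability of reaching each node: product of routing decisions
        # along the path from the root, expanded level by level.
        path = x.new_ones(x.size(0), 1)
        for d in range(self.depth):
            start = 2 ** d - 1                    # first inner node at depth d
            r = right[:, start:start + 2 ** d]    # routers at this depth
            # Each node splits its mass between left (1 - r) and right (r).
            path = torch.stack([path * (1 - r), path * r], dim=2)
            path = path.flatten(start_dim=1)      # (batch, 2 ** (d + 1))
        leaves = F.softmax(self.leaf_logits, dim=1)   # (num_leaves, classes)
        # Soft prediction: leaf distributions weighted by path probabilities.
        return path @ leaves                      # (batch, num_classes)

# Example: a depth-3 tree over 512-dim pooled features for 10 classes.
tree = SoftDecisionTree(in_features=512, depth=3, num_classes=10)
probs = tree(torch.randn(8, 512))                 # (8, 10) class probabilities
```

In the defense pipeline sketched in the abstract, one such tree would presumably be trained per convolutional feature map, with the node path probabilities acting as the selected, perturbation-resistant features fed to the classification network.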
Keywords
adversarial examples, convolutional neural networks, soft decision trees