
LogicDef: An Interpretable Defense Framework Against Adversarial Examples via Inductive Scene Graph Reasoning

Proceedings of the AAAI Conference on Artificial Intelligence (2022)

Citations: 8 | Views: 50
Abstract
Deep vision models have provided new capabilities across a spectrum of applications in transportation, manufacturing, agriculture, commerce, and security. However, recent studies have demonstrated that these models are vulnerable to adversarial attacks, exposing a risk in critical applications where untrusted parties have access to the data environment or even directly to the sensor inputs. Existing adversarial defense methods are either limited to specific types of attacks or are too complex to be applied to practical vision models. More importantly, these methods rely on techniques that are not interpretable to humans. In this work, we argue that an effective defense should produce an explanation of why the system is attacked, using a representation that is easily readable by a human user, e.g., a logic formalism. To this end, we propose logic adversarial defense (LogicDef), a defense framework that utilizes the scene graph of the image to provide a contextual structure for detecting and explaining object classification. Our framework first mines inductive logic rules from the extracted scene graph, and then uses these rules to construct a defense model that alerts the user when the vision model violates the consistency rules. The defense model is interpretable, and its robustness is further enhanced by incorporating existing relational commonsense knowledge from projects such as ConceptNet. To handle the hierarchical nature of such relational reasoning, we use a curriculum learning approach based on object taxonomy, yielding additional improvements to training and performance.
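The abstract describes a pipeline in which logic rules mined from scene graphs are checked against a classifier's prediction, and a rule violation triggers an alert to the user. The sketch below is a minimal, hypothetical illustration of that consistency-checking idea under assumed data structures; it is not the authors' implementation, and the `Rule` class, `violated_rules` function, and example rules and graph are all invented for illustration.

```python
# Hypothetical sketch of scene-graph consistency checking, as described
# in the abstract. A scene graph is modeled as a set of
# (subject, relation, object) triples; a rule says a predicted label
# should co-occur with at least one supporting triple in the graph.
from dataclasses import dataclass

Triple = tuple[str, str, str]
SceneGraph = set[Triple]

@dataclass(frozen=True)
class Rule:
    """If `label` is predicted, at least one `support` triple must hold."""
    label: str
    support: frozenset  # frozenset[Triple]

def violated_rules(pred: str, graph: SceneGraph, rules: list[Rule]) -> list[Rule]:
    """Return the rules for `pred` whose supporting context is absent."""
    return [r for r in rules
            if r.label == pred and not (r.support & graph)]

# Invented example: a "stop sign" prediction is expected to appear in
# road-like context extracted by the scene-graph generator.
rules = [
    Rule("stop sign", frozenset({("stop sign", "on", "pole"),
                                 ("stop sign", "near", "road")})),
]

graph: SceneGraph = {("cat", "on", "sofa")}  # no road context present
if violated_rules("stop sign", graph, rules):
    # The prediction contradicts the mined consistency rules, so the
    # defense would flag a possible adversarial example here.
    print("alert: prediction is inconsistent with the scene graph")
```

The violated rules themselves serve as the human-readable explanation: each one names the contextual relation the prediction was expected to satisfy but did not.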
Keywords
Machine Learning (ML), Knowledge Representation and Reasoning (KRR), Computer Vision (CV)