Adversarial Attack on Graph Structured Data

In International Conference on Machine Learning (ICML), pp. 1123-1132, 2018.


Abstract:

Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the extensive body of research on adversarial attack and defense for images and text. In this paper, we focus on adversarial attacks that fool the model by modifying the combinatorial structure of the data.

Introduction
  • Graph structure plays an important role in many real-world applications. Representation learning on structured data with deep learning methods has shown promising results in various applications, including drug screening (Duvenaud et al, 2015), protein analysis (Hamilton et al, 2017), knowledge graph completion (Trivedi et al, 2017), etc.

    Despite the success of deep graph networks, the lack of interpretability and robustness of these models makes them risky to use in financial or security-related applications.
  • A graph-sensitive evaluation model will typically take the user-user relationship into consideration: a user who connects with many high-credit users may also be assigned high credit.
  • Such heuristics learned by deep graph methods often yield good predictions, but can also put the model at risk.
Highlights
  • Graph structure plays an important role in many real-world applications
  • Representation learning on structured data with deep learning methods has shown promising results in various applications, including drug screening (Duvenaud et al, 2015), protein analysis (Hamilton et al, 2017), knowledge graph completion (Trivedi et al, 2017), etc.
  • We focus on graph adversarial attacks against a family of graph neural network (GNN) models (Scarselli et al, 2009)
  • Inspired by recent advances in combinatorial optimization (Bello et al, 2016; Dai et al, 2017), we propose a reinforcement learning based attack method that learns to modify the graph structure using only the prediction feedback from the target classifier (a minimal sketch of this black-box setting follows this list)
  • We study the adversarial attack on graph structured data
  • We show that a family of Graph Neural Network models are vulnerable to such attacks
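The black-box setting described above, where the attacker only observes the target classifier's predicted label, can be illustrated with a minimal sketch. This is not the authors' RL-S2V implementation: the tabular Q-values, the epsilon-greedy exploration, the edge-addition-only action space, and the `predict_label` placeholder are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' RL-S2V): a black-box attack that only queries
# the target classifier's predicted label and uses "label flipped" as the reward.
# `predict_label` is a hypothetical placeholder for the real target GNN.
import random
from typing import Callable, List, Set, Tuple

Edge = Tuple[int, int]

def attack_one_graph(
    edges: Set[Edge],
    n_nodes: int,
    y_true: int,
    predict_label: Callable[[Set[Edge]], int],  # black-box target classifier
    budget: int = 2,        # max number of edge additions
    episodes: int = 200,
    eps: float = 0.2,       # epsilon-greedy exploration rate
) -> Set[Edge]:
    # Candidate actions: add any currently missing edge.
    candidates: List[Edge] = [
        (u, v) for u in range(n_nodes) for v in range(u + 1, n_nodes)
        if (u, v) not in edges and (v, u) not in edges
    ]
    q = {a: 0.0 for a in candidates}            # tabular value for each single action
    best = set(edges)
    for _ in range(episodes):
        g, chosen = set(edges), []
        for _ in range(budget):
            a = random.choice(candidates) if random.random() < eps \
                else max(candidates, key=q.get)
            g.add(a)
            chosen.append(a)
        # The only feedback is the prediction: reward 1 if the label flips.
        reward = 1.0 if predict_label(g) != y_true else 0.0
        for a in chosen:                        # simple Monte-Carlo style update
            q[a] += 0.1 * (reward - q[a])
        if reward > 0:
            best = g
    return best
```

A caller would pass the clean graph, its true label, and a wrapper around the target model that returns only the predicted class; the returned edge set is the perturbed graph found within the modification budget.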
Methods
  • In Figure 5(b) and (c), the RL agent connects two nodes that are 4 hops away from each other.
  • This shows that although the target classifier structure2vec is trained with K = 4 propagation steps, it does not capture 4-hop information efficiently.
  • Figure 5(a) shows that even when connecting nodes that are just 2 hops away, the classifier still makes a mistake (a sketch of K-step propagation follows these bullets)
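To make the K-hop limitation concrete, here is a minimal sketch of generic K-step neighborhood aggregation (a mean-pooling variant, not structure2vec's exact update referenced as Eq. (3) in the paper): after K rounds, a node's embedding depends only on its K-hop neighborhood, so edges between nodes farther apart are hard for the classifier to account for.

```python
# Minimal sketch of K-step neighborhood aggregation (a generic mean-pooling
# variant, not structure2vec's exact update).  After K rounds a node's
# embedding depends only on its K-hop neighborhood.
import numpy as np

def propagate(adj: np.ndarray, x: np.ndarray, w: np.ndarray, k: int = 4) -> np.ndarray:
    """adj: (n, n) adjacency matrix, x: (n, d) node features, w: (d, d) weights."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)  # avoid division by zero
    h = x
    for _ in range(k):                                  # K propagation steps
        neighbor_mean = adj @ h / deg                   # average over 1-hop neighbors
        h = np.tanh((x + neighbor_mean) @ w)            # combine with own features
    return h                                            # (n, d) node embeddings

# Usage: embeddings for a 5-node path graph with random features.
rng = np.random.default_rng(0)
adj = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[u, v] = adj[v, u] = 1.0
emb = propagate(adj, rng.normal(size=(5, 8)), rng.normal(size=(8, 8)), k=4)
```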
Conclusion
  • The authors study the adversarial attack on graph structured data.
  • To perform efficient attacks, the authors propose three methods, namely RL-S2V, GradArgmax and GeneticAlg, for three different attack settings, respectively (a gradient-guided sketch in the spirit of GradArgmax follows this list).
  • The authors show that a family of GNN models are vulnerable to such attacks.
  • By visualizing the attack samples, the authors can inspect the target classifier.
  • The authors discuss defense methods through experiments.
  • The authors' future work includes developing more effective defense algorithms
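The following sketch illustrates the idea behind a gradient-guided edge flip in the spirit of GradArgmax (white-box setting). It is not the authors' exact procedure: the gradient is approximated here with finite differences on a hypothetical `loss_fn`, whereas the real method backpropagates through the target GNN, and the single-edge budget is an assumption.

```python
# Minimal sketch of a gradient-guided edge flip in the spirit of GradArgmax.
# The "gradient" w.r.t. each adjacency entry is approximated with finite
# differences on a user-supplied loss; `loss_fn` is a hypothetical placeholder
# for the target model's classification loss on this graph.
import numpy as np
from typing import Callable

def grad_argmax_flip(adj: np.ndarray, loss_fn: Callable[[np.ndarray], float],
                     eps: float = 1e-3) -> np.ndarray:
    """Flip the single edge whose perturbation increases the loss the most."""
    n = adj.shape[0]
    base = loss_fn(adj)
    best_gain, best_edge = -np.inf, None
    for u in range(n):
        for v in range(u + 1, n):
            delta = 1.0 - 2.0 * adj[u, v]     # +1 means add the edge, -1 means delete it
            perturbed = adj.copy()
            perturbed[u, v] += eps * delta    # nudge the entry toward its flipped value
            perturbed[v, u] += eps * delta
            gain = loss_fn(perturbed) - base  # directional derivative (times eps)
            if gain > best_gain:
                best_gain, best_edge = gain, (u, v)
    out = adj.copy()
    u, v = best_edge
    out[u, v] = out[v, u] = 1.0 - out[u, v]   # apply the discrete flip
    return out
```

Enumerating all O(n^2) candidate edges with a separate loss evaluation each is only for clarity; a single backward pass over the adjacency matrix would give all the gradients at once.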
Tables
  • Table 1: Application scenarios for the different proposed graph attack methods. Cost is measured by the time complexity of proposing a single attack
  • Table 2: Attack on the graph classification algorithm. We report the 3-class classification accuracy of the target model on vanilla test sets I and II, as well as on the generated adversarial samples. The upper half of the table reports the attack results on test set I, with different levels of access to the information of the target classifier. The lower half reports the results of the RBA setting on test set II, where only RandSampling and RL-S2V can be used. K is the number of propagation steps used in GNN-family models (see Eq. (3))
  • Table 3: Statistics of the graphs used for node classification
  • Table 4: Attack on the node classification algorithm. In the upper half of the table, we report the target model's accuracy before/after the attack on test set I, with various settings and methods. In the lower half, we report accuracy on test set II with the RBA setting only; in this second part, only RandSampling and RL-S2V can be used
  • Table 5: Results after adversarial training by random edge drop (a minimal sketch of this augmentation follows this list)
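Table 5's defense augments training with random edge drop. Below is a minimal sketch of that augmentation, assuming a 10% drop rate and a standard per-graph training loop; both are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch of "random edge drop" augmentation used as a defense: each
# training graph has a small random subset of its edges removed before it is
# fed to the GNN.  The drop rate and the surrounding training loop are
# assumptions for illustration.
import random
from typing import List, Tuple

Edge = Tuple[int, int]

def drop_edges(edges: List[Edge], drop_rate: float = 0.1,
               rng: random.Random = random.Random(0)) -> List[Edge]:
    """Keep each edge independently with probability (1 - drop_rate)."""
    kept = [e for e in edges if rng.random() >= drop_rate]
    return kept if kept else edges            # never return an empty edge set

# Usage inside a (hypothetical) training loop:
# for graph, label in training_set:
#     graph.edges = drop_edges(graph.edges, drop_rate=0.1)
#     loss = model.loss(graph, label)
```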
Funding
  • This project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF IIS-1639792 EAGER, NSF CNS-1704701, ONR N00014-15-1-2340, Intel ISTC, NVIDIA and Amazon AWS
  • Tian Tian and Jun Zhu were supported by the National NSF of China (No. 61620106010) and Beijing Natural Science Foundation (No. L172037)
Reference
  • Akoglu, Leman, Tong, Hanghang, and Koutra, Danai. Graph based anomaly detection and description: a survey. Data Mining and Knowledge Discovery, 29(3):626–688, 2015.
  • Bello, Irwan, Pham, Hieu, Le, Quoc V, Norouzi, Mohammad, and Bengio, Samy. Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940, 2016.
  • Buckman, Jacob, Roy, Aurko, Raffel, Colin, and Goodfellow, Ian. Thermometer encoding: One hot way to resist adversarial examples. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=S18Su--CW.
  • Chen, Pin-Yu, Zhang, Huan, Sharma, Yash, Yi, Jinfeng, and Hsieh, Cho-Jui. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26. ACM, 2017.
  • Dai, Hanjun, Dai, Bo, and Song, Le. Discriminative embeddings of latent variable models for structured data. In ICML, 2016.
  • Dai, Hanjun, Khalil, Elias B, Zhang, Yuyu, Dilkina, Bistra, and Song, Le. Learning combinatorial optimization algorithms over graphs. arXiv preprint arXiv:1704.01665, 2017.
  • Duvenaud, David K, Maclaurin, Dougal, Iparraguirre, Jorge, Bombarell, Rafael, Hirzel, Timothy, Aspuru-Guzik, Alan, and Adams, Ryan P. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2215–2223, 2015.
  • Gilmer, Justin, Schoenholz, Samuel S, Riley, Patrick F, Vinyals, Oriol, and Dahl, George E. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
  • Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • Lei, Tao, Jin, Wengong, Barzilay, Regina, and Jaakkola, Tommi. Deriving neural architectures from sequence and graph kernels. arXiv preprint arXiv:1705.09037, 2017.
  • Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel, Richard. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
  • Miikkulainen, Risto, Liang, Jason, Meyerson, Elliot, Rawal, Aditya, Fink, Dan, Francon, Olivier, Raju, Bala, Navruzyan, Arshak, Duffy, Nigel, and Hodjat, Babak. Evolving deep neural networks. arXiv preprint arXiv:1703.00548, 2017.
  • Moosavi-Dezfooli, Seyed-Mohsen, Fawzi, Alhussein, and Frossard, Pascal. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582, 2016.
  • Papernot, Nicolas, McDaniel, Patrick, Goodfellow, Ian, Jha, Somesh, Celik, Z Berkay, and Swami, Ananthram. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519. ACM, 2017.
  • Real, Esteban, Moore, Sherry, Selle, Andrew, Saxena, Saurabh, Suematsu, Yutaka Leon, Le, Quoc, and Kurakin, Alex. Large-scale evolution of image classifiers. arXiv preprint arXiv:1703.01041, 2017.
  • Scarselli, Franco, Gori, Marco, Tsoi, Ah Chung, Hagenbuchner, Markus, and Monfardini, Gabriele. The graph neural network model. Neural Networks, IEEE Transactions on, 20(1):61–80, 2009.
  • Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
  • Su, Jiawei, Vargas, Danilo Vasconcellos, and Sakurai, Kouichi. One pixel attack for fooling deep neural networks. arXiv preprint arXiv:1710.08864, 2017.
  • Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian, and Fergus, Rob. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • Trivedi, Rakshit, Dai, Hanjun, Wang, Yichen, and Song, Le. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In ICML, 2017.
  • Zügner, Daniel, Akbarnejad, Amir, and Günnemann, Stephan. Adversarial attacks on neural networks for graph data. In KDD, 2018.