Black-Box Adversarial Attacks on Graph Neural Networks with Limited Node Access
Abstract:
We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can attack only a small number of them. A node selection step is therefore essential under this setup. We demonstrate that the structural inductive biases of GNN models can...