
Must-Read GNN Papers from NeurIPS 2020

Author: AMiner

Views: 775

Date: 2020-10-28 02:57

Keywords: top AI conferences, GNN, NeurIPS

Join Xiaomai (小脉) in reading the NeurIPS 2020 papers!

The AMiner platform, developed by the Department of Computer Science and Technology at Tsinghua University, is fully independent Chinese intellectual property. It contains a scientific knowledge graph covering more than 230 million academic papers/patents and 136 million researchers, and provides professional science-intelligence services such as scholar evaluation, expert discovery, intelligent reviewer assignment, and academic maps. Online since 2006, the system has attracted over 10 million unique IP visits from 220 countries/regions worldwide, with 2.3 million data downloads and more than 11 million annual visits, making it an important data and experimental platform for research on academic search and social network mining.

AMiner:https://www.aminer.cn/

NeurIPS, the Conference on Neural Information Processing Systems, is a top-tier machine learning conference held every December. NeurIPS 2020 received a record-high 9,454 submissions, up 38% from last year, and accepted 1,900 papers in total; Google led the field with 202 accepted papers, while Tsinghua University had 64. The acceptance rate of 20.09% is down from last year.
According to the AMiner NeurIPS 2020 word cloud, Xiaomai found that Reinforcement Learning, Neural Network, Graph Neural Network, and Deep Neural Network are among this year's hottest topics, with high submission counts and acceptance rates. Today Xiaomai shares with you the NeurIPS 2020 papers related to Graph Neural Networks.

Since there are quite a few papers, Xiaomai has picked 10 to showcase here; for more GNN papers, please visit: https://www.aminer.cn/conf/neurips2020?f=wx


1. Paper: Bandit Samplers for Training Graph Neural Networks
Link: https://www.aminer.cn/pub/5ee3526a91e011cb3bff73b6?conf=neurips2020
Summary:

  • The authors show that the optimal layer samplers based on importance sampling for training general graph neural networks are computationally intractable, since they require all the neighbors’ hidden embeddings or learned weights.
  • The authors re-formulate the sampling problem as a bandit problem that requires only partial knowledge from the neighbors being sampled.
  • The authors propose two algorithms based on multi-armed bandit and MAB with multiple plays, and show the variance of the bandit sampler asymptotically approaches the optimum within a factor of 3.
  • The authors empirically show that the algorithms can converge to better results with faster rates and lower variances compared with state-of-the-art approaches
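The bandit view of neighbor sampling can be made concrete with a toy sketch. Below, each neighbor of a node is an arm of a multi-armed bandit, and an EXP3-style exponential update shifts sampling probability toward neighbors that yield high reward (in the paper, reward relates to variance reduction; here the reward signal is a hypothetical stand-in, and this is not the authors' exact algorithm):

```python
import math
import random

random.seed(0)  # deterministic for illustration

class BanditNeighborSampler:
    """Toy EXP3-style sampler: each neighbor is an arm, and rewards
    (e.g. how much a neighbor reduced estimator variance) shift the
    sampling distribution toward informative neighbors."""

    def __init__(self, neighbors, eta=0.1):
        self.neighbors = list(neighbors)
        self.eta = eta                          # learning rate
        self.weights = {v: 1.0 for v in self.neighbors}

    def probs(self):
        total = sum(self.weights.values())
        return {v: w / total for v, w in self.weights.items()}

    def sample(self):
        r, acc = random.random(), 0.0
        p = self.probs()
        for v in self.neighbors:
            acc += p[v]
            if r <= acc:
                return v
        return self.neighbors[-1]

    def update(self, v, reward):
        # Importance-weighted exponential update (EXP3 flavor).
        self.weights[v] *= math.exp(self.eta * reward / self.probs()[v])

sampler = BanditNeighborSampler(["a", "b", "c"])
for _ in range(200):
    v = sampler.sample()
    # Hypothetical reward: pretend neighbor "b" is the informative one.
    sampler.update(v, reward=1.0 if v == "b" else 0.0)
```

After the loop, the sampling distribution concentrates on the rewarding neighbor while the others keep their initial weight.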


2. Paper: Can graph neural networks count substructures?
Link: https://www.aminer.cn/pub/5e427c903a55acbff4c40b1d?conf=neurips2020
Summary:

  • The authors propose a theoretical framework to study the expressive power of classes of GNNs based on their ability to count substructures.
  • The authors provide an upper bound on the size of “path-shaped” substructures that finite iterations of k-WL can matching-count.
  • To establish these results, the authors prove an equivalence between approximating graph functions and discriminating graphs.
  • The authors build the foundation for using substructure counting as an intuitive and relevant measure of the expressive power of GNNs, and the concrete results for existing GNNs motivate the search for more powerful designs of GNNs
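Substructure counting as an expressivity yardstick is easy to make concrete. The classic pair below, a 6-cycle versus two disjoint triangles, is 2-regular, so 1-WL (and hence standard message-passing GNNs) cannot tell the two graphs apart, yet their triangle counts differ (a textbook illustration of the limitation, not the paper's own construction):

```python
import numpy as np

def triangle_count(A):
    # Triangles in a simple undirected graph = trace(A^3) / 6.
    return int(round(np.trace(np.linalg.matrix_power(A, 3)) / 6))

def cycle(n):
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

C6 = cycle(6)                                     # one 6-cycle: no triangles
two_C3 = np.block([[cycle(3), np.zeros((3, 3), int)],
                   [np.zeros((3, 3), int), cycle(3)]])  # two disjoint triangles
```

Both graphs get identical 1-WL node colors (every node has degree 2), but `triangle_count` separates them: 0 versus 2.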



3. Paper: Towards Deeper Graph Neural Networks with Differentiable Group Normalization
Link: https://www.aminer.cn/pub/5ee7495191e01198a507f880?conf=neurips2020
Summary:

  • The authors propose two over-smoothing metrics based on graph structures, i.e., group distance ratio and instance information gain.
  • By inspecting GNN models through the lens of these two metrics, the authors present a novel normalization layer, DGN, to boost model performance against oversmoothing.
  • It normalizes each group of similar nodes independently to separate node representations of different classes.
  • The authors' research will facilitate deep learning models for potential graph applications
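The group-wise normalization idea can be sketched with hard group assignments (DGN itself learns a soft, differentiable assignment; this simplification only illustrates why normalizing groups independently keeps different classes' representations from collapsing together):

```python
import numpy as np

def group_normalize(H, groups, eps=1e-5):
    """Standardize each group of node embeddings independently, so the
    statistics of one cluster of nodes never mix with another's.
    Hard-assignment sketch of the idea behind DGN."""
    H = H.astype(float).copy()
    for g in np.unique(groups):
        idx = groups == g
        mu = H[idx].mean(axis=0)
        sd = H[idx].std(axis=0)
        H[idx] = (H[idx] - mu) / (sd + eps)
    return H

H = np.array([[1.0, 2.0], [3.0, 4.0],      # group 0
              [10.0, 10.0], [12.0, 14.0]])  # group 1
groups = np.array([0, 0, 1, 1])
Hn = group_normalize(H, groups)
```

Each group is centered to zero mean and roughly unit variance on its own, rather than being normalized jointly with the whole graph.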


4. Paper: Implicit Graph Neural Networks
Link: https://www.aminer.cn/pub/5f6099ec91e01138058701d5?conf=neurips2020
Summary:

  • The authors present the implicit graph neural network model, a framework of recurrent graph neural networks.
  • Like other recurrent graph neural network models, the implicit graph neural network captures long-range dependencies, but it carries the advantage further, delivering superior performance on a variety of tasks through rigorous convergence conditions and exact, efficient gradient steps.
  • Backed by rigorous mathematical arguments, the work improves GNNs' capability to capture long-range dependencies and boosts performance on these applications
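The core mechanic, defining node representations as the fixed point of one equation rather than the output of finitely many layers, can be sketched as follows. The specific update `X = tanh(W X A + B)` and the small weight scale (which makes the map a contraction so the iteration provably converges) are assumptions of this sketch, not the paper's exact formulation:

```python
import numpy as np

def implicit_gnn_fixed_point(W, A, B, tol=1e-10, max_iter=1000):
    """Solve X = tanh(W @ X @ A + B) by fixed-point iteration.
    Converges when the map is a contraction (small norms of W and A)."""
    X = np.zeros_like(B)
    for _ in range(max_iter):
        X_new = np.tanh(W @ X @ A + B)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    raise RuntimeError("fixed-point iteration did not converge")

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))   # small weights -> contraction (assumed)
A = 0.1 * rng.standard_normal((5, 5))   # stand-in for a (scaled) adjacency matrix
B = rng.standard_normal((4, 5))         # input-dependent bias term
X = implicit_gnn_fixed_point(W, A, B)
```

Because `X` depends on the whole equation at once rather than on a fixed number of propagation steps, information can travel arbitrarily far across the graph.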


5. Paper: Factor Graph Neural Network
Link: https://www.aminer.cn/pub/5d04e8f7da56295d08dca708?conf=neurips2020
Summary:

  • The authors extend graph neural networks to factor graph neural networks, enabling the network to capture higher order dependencies among the variables.
  • The factor graph neural networks can represent the execution of the Max-Product Belief Propagation algorithm on probabilistic graphical models, allowing it to do well when Max-Product does well; at the same time, it has the potential to learn better inference algorithms from data when Max-Product fails.
  • The relationship to graphical model inference through the Max-Product algorithm provides a guide on how knowledge on dependencies can be added into the factor graph neural networks
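To see what the network is emulating, here is the MAP objective on a tiny factor graph, solved by brute force (the quantity Max-Product computes exactly on trees, and which a factor graph neural network learns to approximate; the factor values are toy numbers):

```python
import itertools
import numpy as np

# Toy factor graph over three binary variables: unary factors f_i(x_i)
# plus pairwise factors g_ij(x_i, x_j) that encourage agreement.
unary = {0: np.array([0.9, 0.1]),
         1: np.array([0.2, 0.8]),
         2: np.array([0.5, 0.5])}
pair = {(0, 1): np.array([[1.0, 0.2], [0.2, 1.0]]),
        (1, 2): np.array([[1.0, 0.2], [0.2, 1.0]])}

def map_assignment():
    """Brute-force MAP: the assignment maximizing the product of all
    factors. On trees, Max-Product message passing finds this exactly."""
    best, best_score = None, -1.0
    for x in itertools.product([0, 1], repeat=3):
        score = 1.0
        for i, f in unary.items():
            score *= f[x[i]]
        for (i, j), g in pair.items():
            score *= g[x[i], x[j]]
        if score > best_score:
            best, best_score = x, score
    return best
```

Note how the strong unary preference of variable 0 for state 0 propagates through the agreement factors and overrides variable 1's own unary preference, exactly the kind of higher-order dependency the factor graph formulation captures.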


6. Paper: Building powerful and equivariant graph neural networks with message-passing
Link: https://www.aminer.cn/pub/5ef9c12e91e011b84e1f8d31?conf=neurips2020
Summary:

  • The authors introduced structural message-passing (SMP), a new architecture that is both powerful and permutation equivariant, solving a major weakness of previous message-passing networks.
  • SMP significantly outperforms previous models in learning graph topological properties.
  • The authors believe that the work paves the way to graph neural networks that efficiently manipulate both node and topological features, with potential applications to chemistry, computational biology and neural algorithmic reasoning.
  • This paper introduced a new methodology for building graph neural networks, conceived independently of a specific application.
  • The wide applicability of graph neural networks makes it challenging to foresee how the method will be used and the ethical problems which might occur
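The key trick that makes SMP powerful, letting nodes exchange matrices of node identifiers instead of anonymous feature vectors, can be illustrated with a drastic simplification (not the paper's full architecture): start every node's matrix from the identity and sum neighbors' matrices each round, so entry `U[i, j]` ends up counting walks, a topological feature anonymous messages cannot carry.

```python
import numpy as np

def smp_like_features(A, rounds):
    """Structural-message-passing sketch: each node keeps one row per
    node identifier (U starts as the identity) and sums its neighbors'
    matrices each round, so U[i, j] counts length-`rounds` walks from
    i to j. Illustrative simplification of SMP."""
    U = np.eye(A.shape[0], dtype=int)
    for _ in range(rounds):
        U = A @ U
    return U

# 4-cycle: edges 0-1, 1-2, 2-3, 3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
U2 = smp_like_features(A, 2)
```

After two rounds, node 0 "knows" there are two length-2 walks to node 2 and none to node 1, structural information a standard anonymous message-passing network at the same depth cannot express.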


7. Paper: Adversarial Attack on Graph Neural Networks with Limited Node Access
Link: https://www.aminer.cn/pub/5f7fdd328de39f08283979ac?conf=neurips2020
Summary:

  • The authors propose a novel black-box adversarial attack setup for GNN models with constraint of limited node access, which the authors believe is by far the most restricted and realistic black-box attack setup
  • Through both theoretical analyses and empirical experiments, the authors demonstrate that the strong and explicit structural inductive biases of GNN models make them still vulnerable to this type of adversarial attacks.
  • Even without access to any information about the model training, the graph structure alone can be exploited to damage a deep learning framework with a simple, practical strategy
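How might structure alone tell an attacker which few nodes to perturb? One toy proxy (a stand-in for the importance scores used in such attacks, not the paper's actual selection strategy) is to score each node by the random-walk probability mass it receives from elsewhere in the graph:

```python
import numpy as np

def influence_scores(A, steps=3):
    """Structure-only node scoring: run `steps`-step random walks from
    every node and score each node by the total probability mass it
    receives. Highly scored nodes influence many others' predictions."""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1)              # row-stochastic walk matrix
    return np.linalg.matrix_power(P, steps).sum(axis=0)

# Star graph: node 0 is the hub, nodes 1-4 are leaves.
A = np.zeros((5, 5))
A[0, 1:] = 1
A[1:, 0] = 1
scores = influence_scores(A)
```

On the star graph the hub receives by far the most mass, so an attacker limited to one node would target it, using no model information at all.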


8. Paper: Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks
Link: https://www.aminer.cn/pub/5ee8986f91e011e66831c6e7?conf=neurips2020
Summary:

  • The key assumption of the theory is the weak learning condition (w.l.c., Assumption 1).
  • In this study, the authors analyzed a certain type of transductive learning model and derived its optimization and generalization guarantees under the weak learnability condition (w.l.c.).
  • To the best of the knowledge, this is the first result that multi-scale GNNs provably avoid the over-smoothing from the viewpoint of learning theory.
  • The authors believe that exploring deeper relationships between the w.l.c. and the underlying graph structures such as graph spectra is a promising direction for future research


9. Paper: GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Link: https://www.aminer.cn/pub/5ee8986891e011e66831c556?conf=neurips2020
Summary:

  • The authors introduce GNNGUARD, an algorithm for defending any graph neural network (GNN) against poisoning attacks, including direct targeted, influence targeted, and non-targeted attacks.
  • GNNGUARD mitigates adverse effects by modifying neural message passing of the underlying GNN.
  • This is achieved through the estimation of neighbor relevance and the use of graph memory, which are two critical components that are vital for a successful defense.
  • It would be interesting to extend GNNGUARD to fight adversaries that exploit structural equivalence
  • While such adversarial attackers do not exist yet, this is a fruitful future direction
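The neighbor-relevance idea can be sketched with a hard threshold: keep an edge only when its endpoints' features are sufficiently similar, so an adversarially injected edge between unrelated nodes is pruned. (GNNGUARD itself uses soft, learnable edge weights and a memory mechanism; the threshold `tau` and the toy features below are assumptions of this sketch.)

```python
import numpy as np

def guard_edges(A, X, tau=0.1):
    """Downweight edges between dissimilar nodes: keep edge (i, j) only
    if the cosine similarity of rows X[i] and X[j] exceeds tau.
    A hard-threshold sketch of relevance-based edge pruning."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.maximum(norms, 1e-12)
    S = Xn @ Xn.T                        # pairwise cosine similarities
    return A * (S > tau)

X = np.array([[1.0, 0.0],                # node 0
              [0.9, 0.1],                # node 1: similar to node 0
              [0.0, 1.0]])               # node 2: dissimilar, injected neighbor
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
A_guarded = guard_edges(A, X)
```

The legitimate edge 0–1 survives while the suspicious edge 0–2 is removed before message passing, which is the mechanism that blunts poisoning attacks.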


10. Paper: Path Integral Based Convolution and Pooling for Graph Neural Networks
Link: https://www.aminer.cn/pub/5efcb5a791e011520324582a?conf=neurips2020
Summary:

  • The authors propose a path integral based GNN framework (PAN), which consists of self-consistent convolution and pooling units, the latter closely related to subgraph centrality.
  • PAN can be seen as a generalization of GNNs.
  • PAN achieves excellent performance on various graph classification and regression tasks, while demonstrating a fast convergence rate and great stability.
  • The authors introduce a new graph classification dataset PointPattern which can serve as a new benchmark
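The path-integral flavor of the convolution can be sketched as a weighted sum of adjacency powers: walks of length `n` are discounted by a Boltzmann-like factor `exp(-n/T)` and the result is row-normalized, so influence aggregates over all paths up to length `L` rather than only direct edges. (In the paper the weighting is learned; the fixed exponential weights, cutoff `L`, and "temperature" `T` here are assumptions of this sketch.)

```python
import numpy as np

def pan_operator(A, L=3, T=1.0):
    """Path-integral-style propagation matrix: sum adjacency powers
    A^n for n = 0..L with weights e^(-n/T), then row-normalize so each
    row is a distribution over nodes reachable within L hops."""
    M = sum(np.exp(-n / T) * np.linalg.matrix_power(A, n)
            for n in range(L + 1))
    return M / M.sum(axis=1, keepdims=True)

# 4-cycle adjacency matrix: edges 0-1, 1-2, 2-3, 3-0
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
P = pan_operator(A)
```

Unlike plain adjacency-based convolution, every node also keeps some self-weight (via the `n = 0` and even-length walk terms), and multi-hop neighbors contribute with geometrically decaying strength.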




Add "Xiaomai" (小脉) on WeChat and send the message "NeurIPS" to join the NeurIPS discussion group and exchange ideas with more paper authors!


[About reprinting]: This article is reprinted from AMiner for academic sharing only. For any issues, please contact us at report@aminer.cn.