Neural Message Passing for Multi-Relational Ordered and Recursive Hypergraphs

NeurIPS 2020 (2020)

Abstract

Message passing neural network (MPNN) has recently emerged as a successful framework by achieving state-of-the-art performances on many graph-based learning tasks. MPNN has also recently been extended to multi-relational graphs (each edge is labelled), and hypergraphs (each edge can connect any number of vertices). However, in real-world ...

Introduction
  • Message passing neural network (MPNN) has recently emerged as a successful framework by achieving state-of-the-art performances on many graph-based learning tasks [21].
  • Hyperedges can be multi-relational with vertices appearing in a fixed order.
  • The authors illustrate such structures with a toy example in Figure 1.
  • Structured hypergraphs are seen in academic network datasets.
  • Such structures present several unique challenges because it is not clear how to adapt MPNN to variable-sized hyperedges in them.
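The MPNN framework referred to above can be sketched in a few lines: each vertex aggregates messages from its neighbours and updates its own state. The snippet below is a generic illustration with sum aggregation and random weights, not the paper's G-MPNN itself, which extends this idea to ordered, multi-relational hyperedges.

```python
import numpy as np

def mpnn_step(H, adj, W_msg, W_upd):
    """One message-passing round on a plain graph.

    H: (n, d) vertex features; adj: (n, n) 0/1 adjacency matrix.
    Sum aggregation and a linear update are illustrative choices.
    """
    messages = adj @ (H @ W_msg)          # sum neighbour messages
    return np.tanh(H @ W_upd + messages)  # update each vertex state

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
W_msg = rng.standard_normal((8, 8)) * 0.1
W_upd = rng.standard_normal((8, 8)) * 0.1
H_next = mpnn_step(H, adj, W_msg, W_upd)  # shape (4, 8)
```

Stacking several such rounds lets information propagate beyond immediate neighbours, which is the "recursive neighbourhood aggregation" view discussed in the related-work section.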
Highlights
  • In section 4, we introduce recursive hypergraphs, and extend MPNN to MPNN-R (MPNN-Recursive) for recursive hypergraphs
  • Inspired by Graph Convolutional Networks (GCNs), we propose a Laplacian-based MPNN-R for recursive hypergraphs
  • Our proposed methods are more effective than HGNN and HyperGCN
  • We have unified existing MPNN approaches proposed on a wide range of networks such as multi-relational graphs, hypergraphs, heterogeneous graphs, etc
  • We see opportunities for research applying our work for beneficial purposes, such as investigating whether it could improve performance on natural language processing (NLP) tasks such as machine reading comprehension, relation extraction, and machine translation
  • We have proposed a novel framework, MPNN-R, on recursive hypergraphs
Methods
  • Datasets: Cora, DBLP, ACM, arXiv. Baselines: MLP, HGNN, HyperGCN, HetGNN, HAN, MAGNN. Proposed method: MPNN-R (section 5.1)
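The Laplacian-based variant highlighted above builds on the hypergraph convolution of HGNN (Feng et al., 2019), one of the baselines listed here: X' = D_v^{-1/2} H W_e D_e^{-1} H^T D_v^{-1/2} X Θ, where H is the vertex-hyperedge incidence matrix. The sketch below implements that rule with uniform hyperedge weights on a made-up toy hypergraph; it is an illustration of the baseline operation, not the paper's MPNN-R.

```python
import numpy as np

def hypergraph_conv(X, Hinc, Theta):
    """HGNN-style hypergraph convolution (uniform hyperedge weights).

    X: (n, d) vertex features; Hinc: (n, m) incidence matrix with
    Hinc[v, e] = 1 iff vertex v belongs to hyperedge e.
    """
    Dv = Hinc.sum(axis=1)                    # vertex degrees
    De = Hinc.sum(axis=0)                    # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    A = Dv_inv_sqrt @ Hinc @ De_inv @ Hinc.T @ Dv_inv_sqrt
    return A @ X @ Theta

# Toy hypergraph: 4 vertices, 2 hyperedges {0, 1, 2} and {2, 3}.
Hinc = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
X = np.eye(4)
Theta = np.ones((4, 2))
out = hypergraph_conv(X, Hinc, Theta)        # shape (4, 2)
```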
Results
  • The authors optimise all hyperparameters of all baselines and of their method using grid search.
  • Table 2 shows the results of semi-supervised vertex classification on real-world datasets.
  • The authors' proposed methods are more effective than HGNN and HyperGCN.
  • The authors believe this is because the two baselines do not exploit the positional and relational information in the hypergraph.
  • The authors have conducted ablation studies by removing positional and relational information and the results are in the appendix.
  • The authors have conducted experiments on transductive datasets, and the results are in the appendix
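The grid search mentioned above exhaustively evaluates every combination of hyperparameter values and keeps the best one. The grid values and scoring function below are made up for illustration; a real run would train the model and measure validation error for each configuration.

```python
from itertools import product

# Illustrative hyperparameter grid (values are hypothetical).
grid = {"lr": [1e-3, 1e-2], "hidden": [32, 64], "dropout": [0.3, 0.5]}

def validation_error(cfg):
    # Stand-in for a real training + validation run.
    return cfg["lr"] + 1.0 / cfg["hidden"] + cfg["dropout"]

# Evaluate every combination and keep the lowest-error configuration.
best = min(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=validation_error,
)
```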
Conclusion
  • As can be seen from Table 2, MLP is the least effective on all datasets, which shows that the input recursive hypergraph is informative for classifying vertices.
  • The authors' proposed method MPNN-R is able to consistently outperform all the baselines including HetGNN.
  • Future works include exploiting relational and positional information in recursive hypergraphs for natural language processing tasks.
  • Another interesting direction is to investigate and extend the recent subgraph reasoning methods [74, 57] for inductive vertex embedding for multi-relational ordered hypergraphs.
Tables
  • Table1: Different existing instantiations of G-MPNN on different structures
  • Table2: Results of SSL experiments. We report mean test error ± standard deviation (lower is better) over 100 train-test splits. Please refer to section 5.1 for details
  • Table3: Results of link prediction experiments. We report MRR, Hits@1, and Hits@3 (higher is better). Please refer to section 5.2 for details
Related work
  • In this section, we discuss work related to ours: MPNNs and their extensions to multi-relational graphs (including the closely related multiplex networks and heterogeneous graphs) and to hypergraphs.

    2.1 Message-Passing Neural Networks (MPNNs)

    MPNNs were originally proposed as a framework for deep learning on graphs [21]. MPNN has inspired the current state-of-the-art techniques for graph representation learning (GRL). The reader is referred to comprehensive reviews [7, 4, 76] and extensive surveys [27, 78] on this topic of GRL. The message-passing operation in the MPNN framework can be viewed as recursive neighbourhood aggregation, where local neighbourhood messages are aggregated and passed on to neighbouring vertices [18]. Notable instances of the MPNN framework include popular graph neural networks such as Graph Convolutional Networks (GCNs) [36], ChebNet [14], GraphSAGE [26], Graph Attention Networks [60], Neural Fingerprints [15], Gated Graph Sequence Neural Networks (GGNN) [42], Graph Isomorphism Networks [68], etc. GNNs (and MPNNs) came into existence thanks to two seminal publications on convolutional [8] and recurrent [52] neural networks on graphs. The MPNN framework has been extended to multi-relational graphs in several ways which we discuss next.
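Among the MPNN instances listed above, the GCN of Kipf and Welling [36] has a particularly compact propagation rule: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), where self-loops are added before symmetric degree normalisation. A minimal sketch on a two-vertex toy graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: symmetric normalisation with self-loops, then ReLU.

    A: (n, n) adjacency; H: (n, d) features; W: (d, d') weights.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1], [1, 0]], dtype=float)   # a single edge
H = np.array([[1.0, 2.0], [3.0, 4.0]])
W = np.eye(2)
out = gcn_layer(A, H, W)                      # each row becomes the mean
```

With the identity weight matrix, both output rows equal the mean of the two input rows, which makes the smoothing effect of the normalised propagation easy to see.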
Funding
  • This work is supported by the Ministry of Human Resource Development (Government of India)
Study subjects and analysis
  • Co-authorship data: E1 contains depth-1 hyperedges of co-authorship relationships (all documents co-authored by an author form a hyperedge). An author (a depth-1 hyperedge) can be a co-author of, say, 10 documents; each of these 10 documents is a depth-0 hyperedge connecting all the documents it cites.

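The co-authorship example above can be sketched as a small recursive data structure: depth-0 hyperedges group cited documents, and a depth-1 hyperedge (an author) groups hyperedges one level below. The class and all names here are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Hyperedge:
    """A hyperedge in a recursive hypergraph: its members may be
    plain vertices (depth 0) or hyperedges of smaller depth."""
    depth: int
    members: list = field(default_factory=list)

# Depth-0 hyperedges: each document connects the documents it cites.
doc_a = Hyperedge(depth=0, members=["cited_1", "cited_2"])
doc_b = Hyperedge(depth=0, members=["cited_2", "cited_3"])

# Depth-1 hyperedge: an author connects the documents they co-authored.
author = Hyperedge(depth=1, members=[doc_a, doc_b])

# Every member of a depth-1 hyperedge is itself a depth-0 hyperedge.
assert all(m.depth == 0 for m in author.members)
```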
References
  • Sameer Agarwal, Kristin Branson, and Serge Belongie. Higher order learning with graphs. In ICML, pages 17–24, 2006.
  • John Baez. Incidence Geometry. University of California, Riverside, 2014.
  • Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. Graph convolutional encoders for syntax-aware neural machine translation. In EMNLP, pages 1957–1967, 2017.
  • Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, et al. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018.
  • Inci M. Baytas, Cao Xiao, Fei Wang, Anil K. Jain, and Jiayu Zhou. HHNE: Heterogeneous hyper-network embedding. In ICDM, pages 875–880, 2018.
  • Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NeurIPS, pages 2787–2795, 2013.
  • Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
  • Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In ICLR, 2014.
  • Yukuo Cen, Xu Zou, Jianwei Zhang, Hongxia Yang, Jingren Zhou, and Jie Tang. Representation learning for attributed multiplex heterogeneous network. In KDD, pages 1358–1368, 2019.
  • T.-H. Hubert Chan and Zhibin Liang. Generalizing the hypergraph Laplacian via a diffusion process with mediators. Theoretical Computer Science, pages 416–428, 2020.
  • Hongxu Chen, Hongzhi Yin, Xiangguo Sun, Tong Chen, Bogdan Gabrys, and Katarzyna Musial. Multi-level graph convolutional networks for cross-platform anchor link prediction. In KDD, 2020.
  • Zhengdao Chen, Xiang Li, and Joan Bruna. Supervised community detection with line graph neural networks. In ICLR, 2019.
  • Uthsav Chitra and Benjamin Raphael. Random walks on hypergraphs with edge-dependent vertex weights. In ICML, pages 1172–1181, 2019.
  • Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, pages 3844–3852, 2016.
  • David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NeurIPS, pages 2224–2232, 2015.
  • Bahare Fatemi, Perouz Taslakian, David Vazquez, and David Poole. Knowledge hypergraphs: Prediction beyond binary relations. In IJCAI, 2020.
  • Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In AAAI, pages 3558–3565, 2019.
  • Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. CoRR, abs/1903.02428, 2019.
  • Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King. MAGNN: Metapath aggregated graph neural network for heterogeneous graph embedding. In WWW, pages 2331–2341, 2020.
  • Jean Gallier. Discrete Mathematics. Springer, 2011.
  • Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In ICML, pages 1263–1272, 2017.
  • Liyu Gong and Qiang Cheng. Exploiting edge features for graph neural networks. In CVPR, pages 9211–9219, 2019.
  • Saiping Guan, Xiaolong Jin, Jiafeng Guo, Yuanzhuo Wang, and Xueqi Cheng. NeuInfer: Knowledge inference on n-ary facts. In ACL, 2020.
  • Saiping Guan, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. Link prediction on n-ary relational data. In WWW, pages 583–593, 2019.
  • Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach. In IJCAI, pages 1802–1808, 2017.
  • Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, pages 1024–1034, 2017.
  • William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Engineering Bulletin, 40(3):52–74, 2017.
  • Matthias Hein, Simon Setzer, Leonardo Jost, and Syama Sundar Rangapuram. The total variation on hypergraphs - learning on hypergraphs revisited. In NeurIPS, pages 2427–2435, 2013.
  • Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In WWW, pages 2704–2710, 2020.
  • Jianwen Jiang, Yuxuan Wei, Yifan Feng, Jingxuan Cao, and Yue Gao. Dynamic hypergraph neural networks. In IJCAI, pages 2635–2641, 2019.
  • Taisong Jin, Liujuan Cao, Baochang Zhang, Xiaoshuai Sun, Cheng Deng, and Rongrong Ji. Hypergraph induced convolutional manifold networks. In IJCAI, pages 2670–2676, 2019.
  • Cliff Joslyn and Kathleen Nowak. Ubergraphs: A definition of a recursive hypergraph structure. CoRR, abs/1704.05547, 2017.
  • Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69–74, 2017.
  • Muhammad R. Khan and Joshua Blumenstock. Multi-GCN: Graph convolutional networks for multi-view networks, with applications to global poverty. In AAAI, pages 606–613, 2019.
  • Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
  • Pan Li, Niao He, and Olgica Milenkovic. Quadratic decomposable submodular function minimization. In NeurIPS, pages 1054–1064, 2018.
  • Pan Li and Olgica Milenkovic. Inhomogeneous hypergraph clustering with applications. In NeurIPS, pages 2308–2318, 2017.
  • Pan Li and Olgica Milenkovic. Revisiting decomposable submodular function minimization with incidence relations. In NeurIPS, pages 2237–2247, 2018.
  • Pan Li and Olgica Milenkovic. Submodular hypergraphs: p-Laplacians, Cheeger inequalities and spectral clustering. In ICML, pages 3014–3023, 2018.
  • Shu Li, Wen-Tao Li, and Wei Wang. Co-GCN for multi-view semi-supervised learning. In AAAI, 2020.
  • Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In ICLR, 2016.
  • Hu Linmei, Tianchi Yang, Chuan Shi, Houye Ji, and Xiaoli Li. Heterogeneous graph attention networks for semi-supervised short text classification. In EMNLP, pages 4823–4832, 2019.
  • Yu Liu, Quanming Yao, and Yong Li. Generalizing tensor decomposition for n-ary relational knowledge bases. In WWW, pages 1104–1114, 2020.
  • Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. Commonsense knowledge base completion with structural and semantic context. In AAAI, 2020.
  • Diego Marcheggiani and Ivan Titov. Encoding sentences with graph convolutional networks for semantic role labeling. In EMNLP, pages 1506–1515, 2017.
  • Telmo Menezes and Camille Roth. Semantic hypergraphs. CoRR, abs/1908.10784, 2019.
  • Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. Learning attention-based embeddings for relation prediction in knowledge graphs. In ACL, pages 4710–4723, 2019.
  • Chanyoung Park, Donghyun Kim, Jiawei Han, and Hwanjo Yu. Unsupervised attributed multiplex network embedding. In AAAI, 2020.
  • Adam Paszke, Sam Gross, Francisco Massa, et al. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, pages 8026–8037, 2019.
  • Paolo Rosso, Dingqi Yang, and Philippe Cudré-Mauroux. Beyond triplets: Hyper-relational knowledge graph embedding for link prediction. In WWW, pages 1885–1896, 2020.
  • Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
  • Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In ESWC, pages 593–607, 2018.
  • Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. End-to-end structure-aware convolutional networks for knowledge base completion. In AAAI, pages 4424–4431, 2019.
  • David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
  • Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In CVPR, pages 29–38, 2017.
  • Komal K. Teru, Etienne Denis, and William L. Hamilton. Inductive relation prediction by subgraph reasoning. In ICML, 2020.
  • Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. Dating documents using graph convolution networks. In ACL, pages 1605–1615, 2018.
  • Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. Composition-based multi-relational graph convolutional networks. In ICLR, 2020.
  • Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
  • Petar Velickovic, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R. Devon Hjelm. Deep graph infomax. In ICLR, 2019.
  • Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In ICLR, 2020.
  • Peifeng Wang, Jialong Han, Chenliang Li, and Rong Pan. Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In AAAI, pages 7152–7159, 2019.
  • Pengyang Wang, Jiaping Gui, Zhengzhang Chen, Junghwan Rhee, Haifeng Chen, and Yanjie Fu. A generic edge-empowered graph convolutional network via node-edge mutual enhancement. In WWW, pages 2144–2154, 2020.
  • Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S. Yu. Heterogeneous graph attention network. In WWW, pages 2022–2032, 2019.
  • Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, and Richong Zhang. On the representation and embedding of knowledge bases beyond binary relations. In IJCAI, pages 1300–1307, 2016.
  • Chris Wendler, Markus Püschel, and Dan Alistarh. Powerset convolutional neural networks. In NeurIPS, pages 927–938, 2019.
  • Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
  • Xiaoran Xu, Wei Feng, Yunsheng Jiang, Xiaohui Xie, Zhiqing Sun, and Zhi-Hong Deng. Dynamically pruned message passing networks for large-scale knowledge graph reasoning. In ICLR, 2020.
  • Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. HyperGCN: A new method of training graph convolutional networks on hypergraphs. In NeurIPS, pages 1509–1520, 2019.
  • Chaoqi Yang, Ruijie Wang, Shuochao Yao, and Tarek Abdelzaher. Hypergraph learning with line expansion. CoRR, abs/2005.04843, 2020.
  • Chenzi Zhang, Shuguang Hu, Zhihao Gavin Tang, and T-H. Hubert Chan. Re-revisiting learning on hypergraphs: Confidence interval and subgradient method. In ICML, pages 4026–4034, 2017.
  • Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V. Chawla. Heterogeneous graph neural network. In KDD, pages 793–803, 2019.
  • Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In NeurIPS, pages 5171–5181, 2018.
  • Ruochi Zhang, Yuesong Zou, and Jian Ma. Hyper-SAGNN: A self-attention based graph neural network for hypergraphs. In ICLR, 2020.
  • Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski. Graph convolutional networks: A comprehensive review. Computational Social Networks, 2019.
  • Yubo Zhang, Nan Wang, Yufeng Chen, Changqing Zou, Hai Wan, Xinbin Zhao, and Yue Gao. Hypergraph label propagation network. In AAAI, 2020.
  • Z. Zhang, P. Cui, and W. Zhu. Deep learning on graphs: A survey. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2020.
  • Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Zhiping Shi, Hui Xiong, and Qing He. Relational graph neural network with hierarchical attention for knowledge graph completion. In AAAI, 2020.
  • Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. Learning with hypergraphs: Clustering, classification, and embedding. In NeurIPS, pages 1601–1608, 2007.
  • Hongmin Zhu, Fuli Feng, Xiangnan He, Xiang Wang, Yan Li, Kai Zheng, and Yongdong Zhang. Bilinear graph neural network with neighbor interactions. In IJCAI, 2020.
Author
Naganand Yadati