From Node Embedding to Graph Embedding: Scalable Global Graph Kernel via Random Features

Neural Information Processing Systems Workshop on Relational Representation Learning (NeurIPS 2018 WS) (2018)

Cited by 1 | Views 30
Abstract
Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications. We can generally categorize existing graph kernels into two groups: kernels based on local sub-structures, and kernels based on global properties. The first line of research compares sub-structures of graphs such as random walks [1], shortest paths [2], and graphlets [3]. Specifically, these kernels recursively decompose the graphs into small sub-structures, and then define a feature map over these sub-structures for the resulting graph kernel. However, the aforementioned approaches only consider local patterns rather than global properties, which may substantially limit their effectiveness in some applications. Equally importantly, most of these graph kernels scale poorly to large graphs due to their at-least-quadratic complexity in the number of graphs and cubic complexity in the size of each graph.

Another family of research uses geometric embeddings of graph nodes to capture global properties, which has shown great promise, achieving state-of-the-art performance in graph classification [4–8]. Unfortunately, these global kernel methods do not yield a valid positive-definite (pd) kernel, which deals a serious blow to hopes of using kernel support vector machines. Two recent graph kernels, the multiscale Laplacian kernel [8] and the optimal assignment kernel [7], were developed to overcome these limitations by building a pd kernel between node distributions or through histogram intersection. However, the majority of these approaches have at least quadratic complexity in terms of either the number of graph …
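The title points to approximating such a global graph kernel with an explicit random-feature map, so that the kernel matrix can be computed in time linear in the number of graphs. Below is a minimal illustrative sketch of that general idea, not the authors' implementation: node embeddings are taken from Laplacian eigenvectors (an assumption made here for illustration), and each graph is mapped to a feature vector by comparing its set of node embeddings against randomly drawn landmark sets, so that the inner product of two feature vectors behaves like a pd kernel. All names and parameters (node_embeddings, random_feature_map, D, gamma) are hypothetical.

import numpy as np

def node_embeddings(adj, k=4):
    # Illustrative node embedding: the k smallest nontrivial
    # eigenvectors of the (unnormalized) graph Laplacian L = D - A.
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)       # eigenvalues sorted ascending
    return vecs[:, 1:k + 1]                # skip the trivial constant eigenvector

def random_feature_map(graphs, D=128, max_nodes=5, gamma=1.0, seed=0):
    # Map each graph (a set of node embeddings) to a D-dimensional
    # feature vector; Z @ Z.T then approximates a pd kernel matrix.
    rng = np.random.default_rng(seed)
    k = graphs[0].shape[1]
    features = np.zeros((len(graphs), D))
    for j in range(D):
        # Draw a random landmark set: a small random "graph" in embedding space.
        w = rng.normal(size=(rng.integers(1, max_nodes + 1), k))
        for i, X in enumerate(graphs):
            # Mean nearest-landmark distance between node sets,
            # turned into a bounded feature via exp(-gamma * dist).
            d = np.linalg.norm(X[:, None, :] - w[None, :, :], axis=-1)
            features[i, j] = np.exp(-gamma * d.min(axis=1).mean())
    return features / np.sqrt(D)

if __name__ == "__main__":
    # Usage on random undirected graphs: the explicit feature map gives a
    # kernel matrix in time linear in the number of graphs, avoiding the
    # quadratic pairwise comparisons the abstract criticizes.
    rng = np.random.default_rng(1)
    graphs = []
    for _ in range(6):
        n = rng.integers(5, 10)
        a = (rng.random((n, n)) < 0.4).astype(float)
        a = np.triu(a, 1)
        a = a + a.T                         # symmetric adjacency, no self-loops
        graphs.append(node_embeddings(a))
    Z = random_feature_map(graphs)
    K = Z @ Z.T                             # approximate graph kernel matrix
    print(K.shape, K[0, :3])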