
Learning Hierarchical Representations of Graphs using Neural Network Techniques

user-5d54d98b530c705f51c2fe5a(2017)

Abstract
For a long time, the preferred machine learning algorithms for graph classification have been kernel-based. The reasoning has been that kernels represent an elegant way to handle structured data that cannot be easily represented using numerical vectors or matrices. An important reason for the success of kernel methods is the kernel trick, which essentially replaces computing the feature representation with a call to a kernel function, thus saving computation and memory cost. For some of the most successful kernels in the graph domain, however, such as graphlet kernels, this is not feasible, and one must compute the entire feature distribution in order to obtain the kernel. The main motivation for this work is that latent vector representations of network nodes, and of the network itself, are helpful in many tasks such as node classification and community detection, since they can be used directly in machine learning algorithms such as SVMs and regression techniques. Moreover, real-world networks such as YouTube and Facebook have millions of nodes in their representation graphs, which demands scalable solutions for studying them. On the other hand, language modelling has successfully exploited the power of deep learning in recent years, which has motivated representation learning for nodes in graphs. In this work, we present a novel method to learn latent vector representations for sub-structures in any large graph, and for the graph itself, motivated by recent advances in deep learning and graph kernels. These vector representations encode semantic sub-structure dependencies in a continuous vector space, which can then be leveraged in machine learning algorithms for …
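The abstract's point about graphlet kernels can be made concrete with a minimal sketch (not the paper's method): unlike kernels that admit the kernel trick, a graphlet kernel requires explicitly counting small-subgraph occurrences in each graph before any pairwise kernel value can be computed. Below, two 3-node graphlet types (triangles and open wedges) are counted by brute-force enumeration, and the kernel is a plain dot product of the resulting count vectors; all function names and the edge-list representation are illustrative assumptions.

```python
from itertools import combinations

def graphlet_features(edges):
    """Explicitly count 3-node graphlets (triangles, open wedges) by enumeration.

    This explicit feature computation is exactly what the kernel trick would
    avoid; for graphlet kernels there is no shortcut around it.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangles = wedges = 0
    for a, b, c in combinations(sorted(adj), 3):
        present = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if present == 3:
            triangles += 1
        elif present == 2:
            wedges += 1
    return [triangles, wedges]

def graphlet_kernel(edges1, edges2):
    """Linear kernel on the explicit graphlet-count feature vectors."""
    f1, f2 = graphlet_features(edges1), graphlet_features(edges2)
    return sum(x * y for x, y in zip(f1, f2))

g1 = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a triangle with a pendant edge
g2 = [(0, 1), (1, 2), (2, 3)]           # a path on four nodes
print(graphlet_features(g1))  # → [1, 2]
print(graphlet_features(g2))  # → [0, 2]
print(graphlet_kernel(g1, g2))  # → 4
```

Enumeration over all node triples is O(n^3), which illustrates the abstract's scalability concern: on graphs with millions of nodes, such explicit counting becomes the bottleneck that motivates learned, low-dimensional representations instead.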