Probabilistic Learning on Graphs via Contextual Architectures
Journal of Machine Learning Research (2020)
Abstract
We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian networks learns different distributions of a vertex's neighbourhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures than typical graph convolutional neural networks, with benefits in terms of context spreading between vertices. First, the model learns from graphs via maximum likelihood estimation, without using target labels. Then, a supervised readout is applied to the learned graph embeddings to address graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, which facilitates learning on large-scale data sets. By studying how depth affects the performance of our model, we find that a broader context generally improves performance. In turn, this leads to a critical analysis of some benchmarks used in the literature.
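The abstract describes a layer-wise pipeline: each layer is trained on the previous layers' vertex representations, the per-layer representations are then frozen into an embedding, and a supervised readout consumes that embedding. The sketch below is not the authors' model; the state-update rule and every name in it (`build_layers`, `graph_embedding`) are illustrative stand-ins that only mirror the overall shape of the pipeline, including its per-layer cost that is linear in the number of edges.

```python
# Toy sketch of the layer-wise pipeline described in the abstract.
# NOTE: this is NOT the paper's model; the neighbour-state update is a
# crude stand-in for the per-layer probabilistic (Bayesian network) model.

from collections import Counter

def build_layers(adj, init_states, n_layers):
    """Incrementally build layers: each layer re-labels every vertex from
    the distribution of its neighbours' states in the previous layer.
    Each layer visits every edge once, so the cost per layer is linear
    in the number of edges."""
    layers = [init_states]
    for _ in range(n_layers):
        prev = layers[-1]
        new = []
        for v, neigh in enumerate(adj):
            if neigh:
                counts = Counter(prev[u] for u in neigh)
                new.append(counts.most_common(1)[0][0])
            else:
                new.append(prev[v])  # isolated vertex keeps its state
        layers.append(new)
    return layers

def graph_embedding(layers, n_states):
    """Graph-level embedding: concatenate, layer by layer, the histogram
    of vertex states (the frozen per-layer representations). A supervised
    readout (e.g. a linear classifier) would be trained on this vector."""
    emb = []
    for states in layers:
        hist = [0] * n_states
        for s in states:
            hist[s] += 1
        emb.extend(hist)
    return emb

# A 4-vertex path graph 0-1-2-3 with two initial vertex states.
adj = [[1], [0, 2], [1, 3], [2]]
layers = build_layers(adj, init_states=[0, 0, 1, 1], n_layers=2)
emb = graph_embedding(layers, n_states=2)
print(emb)  # one 2-bin histogram per layer, 3 layers in total
```

The two-stage structure (unsupervised construction of `emb`, then a separate supervised readout) is the point of the sketch: the embedding is built without any target labels, so the same representation can later serve either graph or vertex classification.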
Keywords
Structured domains, deep graph networks, graph neural networks, deep learning, maximum likelihood, graph classification, node classification