Policy Message Passing: Modeling Trajectories for Probabilistic Graph Inference

Semantic Scholar (2021)

Abstract
Learning to perform flexible reasoning over multiple variables is of fundamental importance for various tasks in machine learning. Graph neural networks are an effective framework for building inference processes among variables. A powerful graph-structured neural network architecture operates on graphs through two core components: (1) message functions designed to model relations between nodes; (2) a flexible information aggregation process that carries out the reasoning by passing messages. However, despite considerable effort on message function design, existing graph neural networks have limited capacity to systematically model flexible reasoning over graphs. In this paper, we propose the Policy Message Passing (PMP) algorithm, which takes a probabilistic perspective and reformulates information aggregation as a stochastic sequential process. PMP is built on the variational inference framework and defines a set of automated agents that observe node states and message-passing history to perform actions over the graph. Theoretical interpretations are provided to show that our algorithm achieves improved optimization efficiency as well as a more effective learning process. Experiments show that our algorithm outperforms baselines by up to 40% on complex reasoning tasks while using up to 80% fewer parameters, and is more robust to noisy edges on large-scale graphs.
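The abstract describes two components: per-edge message functions and a policy ("agent") that decides, step by step, how messages are aggregated. The sketch below is only an illustration of that idea, not the paper's actual architecture; the class name, network shapes, and the Bernoulli edge-gating policy are assumptions, since the abstract does not specify the variational objective or the exact action space.

```python
# Minimal sketch of policy-driven message passing (hypothetical; the paper's
# exact architecture, action space, and variational training objective are
# not given in the abstract).
import torch
import torch.nn as nn


class PolicyMessagePassingSketch(nn.Module):
    """One reasoning step: a policy network scores each edge from the current
    node states, samples a subset of edges (the 'action'), and aggregates
    messages only along the sampled edges."""

    def __init__(self, node_dim: int, msg_dim: int):
        super().__init__()
        # (1) message function modelling the relation between a node pair
        self.message_fn = nn.Sequential(
            nn.Linear(2 * node_dim, msg_dim), nn.ReLU(), nn.Linear(msg_dim, node_dim)
        )
        # policy ("agent") that observes the endpoint states of an edge and
        # emits the probability of passing a message along that edge
        self.policy = nn.Linear(2 * node_dim, 1)
        # (2) node state update after aggregation
        self.update_fn = nn.GRUCell(node_dim, node_dim)

    def forward(self, h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, node_dim]; edges: [num_edges, 2] with (src, dst) indices
        src, dst = edges[:, 0], edges[:, 1]
        pair = torch.cat([h[src], h[dst]], dim=-1)      # per-edge observation
        edge_prob = torch.sigmoid(self.policy(pair))     # policy over edges
        # stochastic action: sample which edges carry a message this step
        # (training would use a relaxed or REINFORCE-style gradient estimator)
        mask = torch.bernoulli(edge_prob)
        msg = self.message_fn(pair) * mask               # gated messages
        # aggregate incoming messages at each destination node
        agg = torch.zeros_like(h).index_add_(0, dst, msg)
        return self.update_fn(agg, h)                    # updated node states


# Usage: 5 nodes, a small directed cycle, one reasoning step.
h = torch.randn(5, 16)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])
layer = PolicyMessagePassingSketch(node_dim=16, msg_dim=32)
h_next = layer(h, edges)
print(h_next.shape)  # torch.Size([5, 16])
```

In this reading, running such a step repeatedly yields a trajectory of sampled aggregation actions, which is what a sequential, probabilistic view of inference over the graph would amount to.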