What graph neural networks cannot learn: depth vs width

ICLR, 2020.


Abstract:

This paper studies the expressive power of graph neural networks falling within the message-passing framework (GNNmp). Two results are presented. First, GNNmp are shown to be Turing universal under sufficient conditions on their depth, width, node attributes, and layer expressiveness. Second, it is discovered that GNNmp can lose a significant portion of their power when their depth and width are restricted.
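The message-passing framework referenced above can be illustrated with a minimal sketch (not code from the paper): each layer lets every node aggregate its neighbours' features and combine them with its own. The function name, weight shapes, and the sum-then-ReLU update rule here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def gnn_mp_layer(features, adjacency, w_self, w_neigh):
    """One message-passing step (illustrative):
    h_v' = ReLU(W_self h_v + W_neigh * sum of h_u over neighbours u of v)."""
    messages = adjacency @ features           # sum of neighbour features per node
    updated = features @ w_self + messages @ w_neigh
    return np.maximum(updated, 0.0)           # ReLU non-linearity

# Toy example: a path graph on 3 nodes with 2-dimensional node attributes.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3, 2)                              # simple one-hot-style node attributes
rng = np.random.default_rng(0)
H = gnn_mp_layer(X, A, rng.normal(size=(2, 2)), rng.normal(size=(2, 2)))
print(H.shape)                                # → (3, 2): one feature row per node
```

Stacking d such layers gives a depth-d GNNmp whose width is the feature dimension; the paper's depth-vs-width trade-off is stated in terms of exactly these two quantities.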
