What graph neural networks cannot learn: depth vs width
ICLR, 2020.
Abstract:
This paper studies the expressive power of graph neural networks falling within the message-passing framework (GNNmp). Two results are presented. First, GNNmp are shown to be Turing universal under sufficient conditions on their depth, width, node attributes, and layer expressiveness. Second, it is discovered that GNNmp can lose a significant portion of their power when their depth and width is restricted.
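To make the message-passing framework concrete, below is a minimal sketch of one GNNmp layer in Python: each node aggregates messages from its neighbors and updates its own state, and the paper's bounds relate the number of such layers (depth) and the node-state dimension (width) to what the network can compute. The names gnn_mp_layer, msg, and upd, as well as the sum aggregation, are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gnn_mp_layer(adj, x, msg, upd):
    """One message-passing (GNNmp) layer: every node v aggregates
    messages from its neighbors, then updates its state.
    Names and sum-aggregation are illustrative, not from the paper."""
    n = adj.shape[0]
    new_x = np.empty_like(x)
    for v in range(n):
        neighbors = np.nonzero(adj[v])[0]
        # Sum-aggregate the messages sent to v by its neighbors.
        agg = sum(msg(x[u], x[v]) for u in neighbors)
        new_x[v] = upd(x[v], agg)
    return new_x

# Toy run: a 3-node star graph, width-1 node attributes, depth 3.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=int)
x = np.array([[1.0], [2.0], [3.0]])      # node attributes (width 1)
msg = lambda xu, xv: xu                  # message: neighbor's state
upd = lambda xv, agg: 0.5 * (xv + agg)   # update: simple mixing
for _ in range(3):                       # depth = number of layers
    x = gnn_mp_layer(adj, x, msg, upd)
print(x)
```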