Nov 28, 2022
Recently, many subgraph-enhanced graph neural networks (GNNs) have emerged, provably boosting the expressive power of standard (message-passing) GNNs. However, there is a limited understanding of how these approaches relate to each other and to the Weisfeiler-Leman hierarchy. Further, current approaches either use all subgraphs of a given size, sample them uniformly at random, or select them with hand-crafted heuristics, oblivious to the given data distribution. Here, we offer a unified way to study such architectures by introducing a theoretical framework and extending the known expressivity results of subgraph-enhanced GNNs. That is, we show that increasing subgraph size always increases expressive power, and we develop a better understanding of these architectures' limitations by relating them to the established k-𝖶𝖫 hierarchy. In addition, we explore different approaches for sampling subgraphs using state-of-the-art data-driven methods for backpropagating through discrete structures via perturbation-based implicit differentiation. Empirically, we study the predictive performance of different subgraph-enhanced GNNs, showing that our data-driven architectures increase prediction accuracy on standard benchmark datasets compared to non-data-driven subgraph-enhanced GNNs while vastly reducing computation time.
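To make the sampling idea concrete, below is a minimal, hedged sketch of one common way to backpropagate through a discrete subgraph choice: a Monte-Carlo "perturbed top-k" estimator (perturb-and-MAP style), where per-subgraph scores are perturbed with noise and a hard top-k is taken per sample, yielding a soft selection indicator whose gradient can be estimated. This is an illustrative stand-in, not the paper's exact method; the function name `perturbed_topk` and all parameters (`sigma`, `n_samples`) are assumptions for this sketch.

```python
import numpy as np

def perturbed_topk(scores, k, sigma=0.5, n_samples=100, rng=None):
    """Soft top-k selection via noise perturbation (perturb-and-MAP style).

    scores    : 1-D array of learned per-subgraph scores
    k         : number of subgraphs to select
    sigma     : noise scale; smaller => closer to hard top-k
    n_samples : Monte-Carlo samples used to average the hard selections

    Returns a soft indicator in [0, 1] per subgraph; entries sum to k,
    since every perturbed sample selects exactly k subgraphs.
    """
    rng = np.random.default_rng(rng)
    # Add i.i.d. Gaussian noise to the scores, once per Monte-Carlo sample.
    noise = rng.normal(scale=sigma, size=(n_samples,) + scores.shape)
    perturbed = scores[None, :] + noise
    # Hard top-k per sample: indices of the k largest perturbed scores.
    idx = np.argpartition(-perturbed, k, axis=1)[:, :k]
    ind = np.zeros_like(perturbed)
    np.put_along_axis(ind, idx, 1.0, axis=1)
    # Averaging the hard one-hot selections gives a differentiable-in-
    # expectation soft indicator over subgraphs.
    return ind.mean(axis=0)
```

In a data-driven subgraph GNN, the resulting soft indicator could weight the per-subgraph messages, so that the scoring network learns which subgraphs matter for the downstream task rather than enumerating or uniformly sampling them.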