
Graph Neural Networks Use Graphs When They Shouldn’t


If you have a question about this talk, please contact Ferdia Sherry.

Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it. While GNNs have the ability to ignore the graph structure in such cases, it is not clear that they will. In this talk, I will show that GNNs actually tend to overfit the given graph structure: they use it even when a better solution can be obtained by ignoring it. By analyzing the implicit bias of gradient-descent learning of GNNs, I will show that when the ground-truth function does not use the graph, GNNs are not guaranteed to learn a solution that ignores it, even with infinite data. I will then prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent. Based on our empirical and theoretical findings, I will demonstrate on real data how regular graphs can be leveraged to reduce graph overfitting and enhance performance.
Finally, I will present a recent approach, Cayley Graph Propagation, which propagates information over a special class of regular graphs, the Cayley graphs of the special linear group SL(2, Zn), to alleviate graph overfitting and information bottlenecks.
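
To make the last ingredient concrete, here is a minimal Python sketch (not the speaker's implementation) of the graph family mentioned above: it enumerates the Cayley graph of SL(2, Zn) generated by the two standard unipotent matrices and their inverses, which yields a 4-regular graph. The function name cayley_graph_sl2 and the generator choice are illustrative assumptions, following the common construction of these graphs.

```python
def cayley_graph_sl2(n):
    """Return (vertices, edges) of the Cayley graph of SL(2, Z_n).

    Vertices are 2x2 matrices over Z_n with determinant 1, stored as
    tuples of tuples; each vertex is joined to its products with the
    generators [[1,1],[0,1]], [[1,0],[1,1]] and their inverses mod n.
    """
    gens = [
        ((1, 1), (0, 1)),        # upper unipotent generator
        ((1, 0), (1, 1)),        # lower unipotent generator
        ((1, n - 1), (0, 1)),    # inverse of the first generator mod n
        ((1, 0), (n - 1, 1)),    # inverse of the second generator mod n
    ]

    def mul(a, b):
        # 2x2 matrix multiplication with entries reduced mod n
        return tuple(
            tuple(sum(a[i][k] * b[k][j] for k in range(2)) % n for j in range(2))
            for i in range(2)
        )

    identity = ((1, 0), (0, 1))
    vertices, frontier, edges = {identity}, [identity], set()
    # Exhaustive expansion from the identity enumerates all of SL(2, Z_n)
    while frontier:
        v = frontier.pop()
        for g in gens:
            w = mul(v, g)
            edges.add(frozenset((v, w)))
            if w not in vertices:
                vertices.add(w)
                frontier.append(w)
    return vertices, edges


if __name__ == "__main__":
    # |SL(2, Z_n)| for prime n is n * (n^2 - 1); e.g. n = 5 gives 120 vertices
    v, e = cayley_graph_sl2(5)
    print(len(v), len(e))  # expected: 120 vertices, 240 edges (4-regular)
```

Because every vertex has the same degree, graphs built this way are regular, which is the property the abstract identifies as helpful for extrapolation and for reducing graph overfitting.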

This talk is part of the Cambridge Image Analysis Seminars series.

