
Theoretical Foundations of Graph Neural Networks


  • Speaker: Dr Petar Veličković, DeepMind
  • Time: Wednesday 17 February 2021, 15:00-16:00
  • Venue: Online

If you have a question about this talk, please contact Ben Karniely.

Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalisations of CNNs to graph-structured data, and neural message-passing approaches. These advances in graph neural networks (GNNs) and related techniques have led to new state-of-the-art results in numerous domains: chemical synthesis, vehicle routing, 3D vision, recommender systems, question answering, continuous control, self-driving and social network analysis. Accordingly, GNNs regularly feature among the fastest-growing research trends and workshop topics at virtually all top machine learning conferences.

But what exactly is a GNN? A quick online search reveals many different definitions, which may differ drastically (or even use entirely different terminology) depending on the background the writer assumes. And this is no coincidence: the concepts that we now attribute to graph neural networks have independently emerged over the past decade(s) from a variety of machine learning directions.

In this talk, I will attempt to provide a “bird’s eye” view on GNNs. Following a quick motivation on the utility of graph representation learning, I will derive GNNs from first principles of permutation invariance and equivariance. Through this lens, I will then describe how researchers from various fields (graph embeddings, graph signal processing, probabilistic graphical models, and graph isomorphism testing) arrived, independently, at essentially the same concept of a GNN.
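The permutation-equivariance principle mentioned above can be illustrated with a short sketch (not from the talk itself; the layer and weight names here are illustrative assumptions): a message-passing layer that aggregates neighbours by summation commutes with any relabelling of the nodes, provided the features and the adjacency matrix are permuted consistently.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4                                    # 5 nodes, 4 features each

X = rng.normal(size=(n, d))                    # node feature matrix
A = (rng.random((n, n)) < 0.4).astype(float)   # adjacency matrix
W_self = rng.normal(size=(d, d))               # illustrative weight matrices
W_neigh = rng.normal(size=(d, d))

def gnn_layer(X, A):
    """One sum-aggregation message-passing layer:
    h_i = relu(x_i W_self + sum_j A_ij x_j W_neigh)."""
    return np.maximum(X @ W_self + A @ X @ W_neigh, 0.0)

# Relabel the nodes with a permutation matrix P.
perm = rng.permutation(n)
P = np.eye(n)[perm]

# Equivariance: permuting the output equals applying the layer
# to the permuted inputs (features AND adjacency together).
lhs = P @ gnn_layer(X, A)
rhs = gnn_layer(P @ X, P @ A @ P.T)
assert np.allclose(lhs, rhs)
```

Summing a node's neighbours is what makes this work: the sum is indifferent to the order in which neighbours are listed, so relabelling the graph merely relabels the outputs.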

The talk will be geared towards a generic computer science audience, though some basic knowledge of machine learning with neural networks will be useful. I also hope that seasoned GNN practitioners may benefit from the categorisation I will present.

The content is inspired by the work of Will Hamilton, as well as my ongoing work on the categorisation of geometric deep learning, alongside Joan Bruna, Michael Bronstein and Taco Cohen.

Link to join: https://cl-cam-ac-uk.zoom.us/j/91253900399?pwd=SU5TNnpYdDlQbzQ4SEVPVWVWa0Nldz09

A recording of this talk is available at the following link: https://www.cl.cam.ac.uk/seminars/wednesday/video/

This talk is part of the Computer Laboratory Wednesday Seminars series.


© 2006-2021 Talks.cam, University of Cambridge.