Theoretical Foundations of Graph Neural Networks
If you have a question about this talk, please contact Ben Karniely.

Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of CNNs to graph-structured data, and neural message-passing approaches. These advances in graph neural networks (GNNs) and related techniques have led to new state-of-the-art results in numerous domains: chemical synthesis, vehicle routing, 3D vision, recommender systems, question answering, continuous control, self-driving and social network analysis. Accordingly, GNNs regularly top the charts of fastest-growing trends and workshops at virtually all top machine learning conferences.

But what even is a GNN? A quick online search reveals many different definitions, which may differ drastically (or even use entirely different terminology) depending on the background the writer assumes. This is no coincidence: the concepts we now attribute to graph neural networks have independently emerged over the past decade(s) from a variety of machine learning directions.

In this talk, I will attempt to provide a “bird’s eye” view of GNNs. Following a quick motivation on the utility of graph representation learning, I will derive GNNs from the first principles of permutation invariance and equivariance. Through this lens, I will then describe how researchers from various fields (graph embeddings, graph signal processing, probabilistic graphical models, and graph isomorphism testing) arrived, independently, at essentially the same concept of a GNN.

The talk will be geared towards a generic computer science audience, though some basic knowledge of machine learning with neural networks will be useful. I also hope that seasoned GNN practitioners may benefit from the categorisation I will present. The content is inspired by the work of Will Hamilton, as well as my ongoing work on the categorisation of geometric deep learning, alongside Joan Bruna, Michael Bronstein and Taco Cohen.

Link to join: https://cl-cam-ac-uk.zoom.us/j/91253900399?pwd=SU5TNnpYdDlQbzQ4SEVPVWVWa0Nldz09

A recording of this talk is available at the following link: https://www.cl.cam.ac.uk/seminars/wednesday/video/

This talk is part of the Wednesday Seminars - Department of Computer Science and Technology series.
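For readers less familiar with the “first principles” mentioned in the abstract, the sketch below (my own illustration, not material from the talk) shows a minimal sum-aggregation message-passing layer in NumPy and checks that it is permutation equivariant: relabelling the nodes before applying the layer gives the same result as relabelling them afterwards.

```python
# Minimal sketch of a permutation-equivariant message-passing layer.
# Function and variable names are illustrative, not from the talk.
import numpy as np

def gnn_layer(X, A, W_self, W_neigh):
    """One message-passing step: h_i' = relu(W_self^T h_i + W_neigh^T sum_j A_ij h_j)."""
    messages = A @ X @ W_neigh        # sum over neighbours is permutation-invariant
    updated = X @ W_self + messages   # combine with each node's own features
    return np.maximum(updated, 0.0)   # elementwise ReLU

# Check equivariance: permuting nodes before the layer equals permuting after it.
rng = np.random.default_rng(0)
n, d = 5, 4
X = rng.normal(size=(n, d))                    # node features
A = (rng.random((n, n)) < 0.4).astype(float)   # random adjacency matrix
W_self, W_neigh = rng.normal(size=(d, d)), rng.normal(size=(d, d))
P = np.eye(n)[rng.permutation(n)]              # random permutation matrix
lhs = gnn_layer(P @ X, P @ A @ P.T, W_self, W_neigh)
rhs = P @ gnn_layer(X, A, W_self, W_neigh)
assert np.allclose(lhs, rhs)
```

The equivariance holds because the neighbourhood aggregation (a sum) does not depend on the ordering of the nodes, which is the property the abstract refers to.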