The Interpretability of Graph Neural Networks

If you have a question about this talk, please contact Pietro Lio.

Join Zoom Meeting https://zoom.us/j/99166955895?pwd=SzI0M3pMVEkvNmw3Q0dqNDVRalZvdz09

Graph neural networks (GNNs) have demonstrated strong performance on graph-structured data. However, like many machine learning models, GNNs lack transparency and are not readily interpretable. This lack of trust may make practitioners reluctant to deploy them in high-stakes and safety-critical real-world applications, which motivates the development of methods for explaining GNNs. In this talk, I give an overview of current trends at the frontier of this research area and discuss the general challenges researchers are currently tackling. I then present my recent work, accepted to AAAI 2023: an investigation into the behaviour of individual GNN neurons. We find that GNN neurons behave like concept detectors and can be used to extract insights from the model that align with human intuition. We then use these neuron-level concepts to construct global explanations, outperforming the previous state-of-the-art approach in terms of explanation quality.
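
To make the idea concrete, below is a minimal sketch (not the speaker's implementation) of probing whether an individual GNN neuron behaves like a concept detector. It assumes node-level activations from one hidden layer of a trained model and a boolean per-node concept mask; all names, shapes, and thresholds are illustrative assumptions.

# Illustrative sketch: score each hidden neuron by how well the set of
# nodes on which it "fires" overlaps a human-interpretable concept
# (e.g. "node belongs to a ring motif"). High overlap suggests the
# neuron acts as a detector for that concept. Hypothetical names only.

import torch

def concept_alignment(activations: torch.Tensor,
                      concept_mask: torch.Tensor,
                      threshold: float = 0.0) -> torch.Tensor:
    # activations:  [num_nodes, num_neurons] hidden activations.
    # concept_mask: [num_nodes] bool, True where the concept is present.
    # Returns:      [num_neurons] IoU scores between each neuron's
    #               firing set and the concept's node set.
    fires = activations > threshold            # [nodes, neurons] bool
    concept = concept_mask.unsqueeze(1)        # [nodes, 1] bool, broadcasts
    intersection = (fires & concept).sum(dim=0).float()
    union = (fires | concept).sum(dim=0).float().clamp(min=1)
    return intersection / union

# Usage: stand-in data in place of real activations and concept labels.
acts = torch.randn(100, 64)
mask = torch.rand(100) > 0.8
scores = concept_alignment(acts, mask)
best = int(scores.argmax())
print(f"neuron {best} best matches the concept (IoU={float(scores[best]):.2f})")

The intersection-over-union scoring above is one common way to quantify neuron-concept alignment; the actual metric and concept extraction in the AAAI 2023 paper may differ.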

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.

