
Neural Symbolic Interpretability


If you have a question about this talk, please contact Mateja Jamnik.

Please note the unusual day and time.

Neuro-symbolic (NeSy) interpretability provides a formal language for controlling deep neural networks (DNNs) and ensuring that their behavior satisfies human desiderata. We will present the architectural conditions that enable NeSy control in DNNs and, based on these conditions, introduce a general blueprint for instantiating NeSy-interpretable reasoners. We illustrate this paradigm using two representative examples: verifiable and causally transparent concept-based models.
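To make the idea of concept-based models concrete, here is a minimal, hedged sketch of one well-known family, a concept bottleneck model, in which task predictions flow through human-readable concept activations that can be inspected and intervened on. All dimensions, weights, and names below are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Illustrative concept-bottleneck sketch: input x -> concept
# activations c -> task logits y. The concept layer is the
# human-inspectable interface; intervening on c changes y.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 4 input features, 3 named concepts, 2 classes.
W_c = rng.normal(size=(3, 4))   # input -> concepts
W_y = rng.normal(size=(2, 3))   # concepts -> task logits

def predict(x):
    c = sigmoid(W_c @ x)        # concept activations in [0, 1]
    y = W_y @ c                 # task logits read off the concepts
    return c, y

x = rng.normal(size=4)
concepts, logits = predict(x)

# Because every prediction is a function of the concepts, a human can
# intervene on one concept and observe the effect on the output:
c_fixed = concepts.copy()
c_fixed[0] = 1.0                # force concept 0 fully "on"
logits_after = W_y @ c_fixed
```

The intervention step illustrates, in miniature, the kind of control over model behavior that the abstract refers to: the symbolic/concept layer gives a handle for checking and steering the network.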

Pietro is a Swiss Postdoctoral Fellow and ELLIS member at IBM Research. Previously, he was a postdoc at the Università della Svizzera italiana and received his PhD from the University of Cambridge. His research focuses on the mathematical foundations of interpretability and on developing causally transparent models that go beyond the current accuracy-interpretability trade-off.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.


 

© 2006-2026 Talks.cam, University of Cambridge.