Neural Symbolic Interpretability
If you have a question about this talk, please contact Mateja Jamnik. Note the unusual day and time.

Neuro-symbolic (NeSy) interpretability provides a formal language for controlling deep neural networks (DNNs) and ensuring that their behaviour satisfies human desiderata. We will present the architectural conditions that enable NeSy control in DNNs and, based on these conditions, introduce a general blueprint for instantiating NeSy-interpretable reasoners. We illustrate this paradigm with two representative examples: verifiable and causally transparent concept-based models.

Pietro is a Swiss Postdoctoral Fellow and ELLIS member at IBM Research. Previously, he was a postdoc at the Università della Svizzera italiana and received his PhD from the University of Cambridge. His research focuses on the mathematical foundations of interpretability and on developing causally transparent models that go beyond the current accuracy-interpretability trade-off.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
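To make the paradigm concrete: a minimal sketch of the kind of concept-based model the abstract alludes to, where the network predicts human-interpretable concepts first and the task label is computed only from those concepts, so a symbolic rule over the concepts can audit the prediction. All names, dimensions, and the rule itself are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

# Illustrative sketch only (hypothetical weights, concepts, and rule):
# a tiny concept-bottleneck model. Inputs map to interpretable concept
# activations, and the class label depends *only* on those concepts,
# which is what allows symbolic checks on the model's behaviour.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical dimensions: 8 input features, 3 named concepts, 2 classes.
W_concept = rng.normal(size=(8, 3))   # input -> concept logits
W_task = rng.normal(size=(3, 2))      # concepts -> class logits
CONCEPTS = ["has_wings", "has_beak", "flies"]

def predict(x):
    c = sigmoid(x @ W_concept)        # interpretable bottleneck
    y = int((c @ W_task).argmax())    # label computed from concepts only
    return c, y

# A symbolic constraint expressing a human desideratum:
# "anything predicted to fly must have wings".
def violates_rule(c):
    flies = c[CONCEPTS.index("flies")] > 0.5
    wings = c[CONCEPTS.index("has_wings")] > 0.5
    return bool(flies and not wings)

x = rng.normal(size=8)
c, y = predict(x)
print(f"concepts={np.round(c, 2)}, class={y}, rule_violated={violates_rule(c)}")
```

Because the label is a function of the concept vector alone, constraints like `violates_rule` can be checked, or even enforced at training time, without inspecting opaque hidden features.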