BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Interpretable Neural-Symbolic Concept Reasoning - Pietro Barbiero 
 (University of Cambridge)
DTSTART:20230620T120000Z
DTEND:20230620T130000Z
UID:TALK200470@talks.cam.ac.uk
CONTACT:Mateja Jamnik
DESCRIPTION:Deep learning methods are highly accurate\, yet their
  opaque decision process prevents them from earning full human trust.
  Concept-based models aim to address this issue by learning tasks
  based on a set of human-understandable concepts. However\,
  state-of-the-art concept-based models rely on high-dimensional
  concept embedding representations which lack a clear semantic
  meaning\, calling into question the interpretability of their
  decision process. To overcome this limitation\, we propose the Deep
  Concept Reasoner (DCR)\, the first interpretable concept-based model
  that builds upon concept embeddings. In DCR\, neural networks do not
  make task predictions directly\; instead\, they build syntactic rule
  structures using concept embeddings. DCR then executes these rules
  on meaningful concept truth degrees to provide a final
  interpretable\, semantically consistent prediction in a
  differentiable manner. Our experiments show that DCR: (i) improves
  performance by up to +25% w.r.t. state-of-the-art interpretable
  concept-based models on challenging benchmarks\, (ii) discovers
  meaningful logic rules matching known ground truths even in the
  absence of concept supervision during training\, and (iii)
  facilitates the generation of counterfactual examples using the
  learnt rules as guidance.\n\nYou can also join us on Zoom:
  https://cl-cam-ac-uk.zoom.us/j/92041617729
LOCATION:Zoom only
END:VEVENT
END:VCALENDAR
