BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Artificial Intelligence Research Group Talks (Comp
 uter Laboratory)
SUMMARY:Global Explainability of GNNs via Logic Combinatio
 n of Learned Concepts - Steve Azzolin
DTSTART;TZID=Europe/London:20221125T170000
DTEND;TZID=Europe/London:20221125T180000
UID:TALK192992AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/192992
DESCRIPTION:While instance-level explanation of GNNs is a well-s
 tudied problem with plenty of approaches being de
 veloped\, providing a global explanation for the b
 ehavior of a GNN is much less explored\, despite i
 ts potential in interpretability and debugging. Ex
 isting solutions either simply list local explanat
 ions for a given class\, or generate a synthetic p
 rototypical graph with maximal score for a given c
 lass\, completely missing any combinatorial aspect
  that the GNN could have learned.\nIn this work\, 
 we propose GLGExplainer (Global Logic-based GNN Ex
 plainer)\, the first Global Explainer capable of g
 enerating explanations as arbitrary Boolean combin
 ations of learned graphical concepts. GLGExplainer
  is a fully differentiable architecture that takes
  local explanations as inputs and combines them in
 to a logic formula over graphical concepts\, repre
 sented as clusters of local explanations. \nContra
 ry to existing solutions\, GLGExplainer provides a
 ccurate and human-interpretable global explanation
 s that are aligned with ground-truth explanations 
 (on synthetic data) or match existing domain knowl
 edge (on real-world data). Extracted formulas are 
 faithful to the model predictions\, to the point o
 f providing insights into some occasionally incorr
 ect rules learned by the model\, making GLGExplain
 er a promising diagnostic tool for learned GNNs.\n
 \nhttps://zoom.us/j/99166955895?pwd=SzI0M3pMVEkvNm
 w3Q0dqNDVRalZvdz09
LOCATION:Online (Zoom)
CONTACT:Pietro Lio
END:VEVENT
END:VCALENDAR
