BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Multi-Head Explainer: A General Framework to Improve Explainabilit
 y in CNNs and Transformers - Bohang Sun (Bob)\, University of Electronic
  Science and Technology of China
DTSTART:20250117T150000Z
DTEND:20250117T160000Z
UID:TALK225973@talks.cam.ac.uk
CONTACT:Pietro Lio
DESCRIPTION:In this study\, we introduce the Multi-Head Explainer (MHEX)\,
  a versatile and modular framework that enhances both the explainability a
 nd accuracy of Convolutional Neural Networks (CNNs) and Transformer-based 
 models. MHEX consists of three core components: an Attention Gate that dyn
 amically highlights task-relevant features\, Deep Supervision that guides 
 early layers to capture fine-grained details pertinent to the target class
 \, and an Equivalent Matrix that unifies refined local and global represen
 tations to generate comprehensive saliency maps. Our approach demonstrates
  superior compatibility\, enabling effortless integration into existing re
 sidual networks like ResNet and Transformer architectures such as BERT wit
 h minimal modifications. Extensive experiments on benchmark datasets in me
 dical imaging and text classification show that MHEX not only improves cla
 ssification accuracy but also produces highly interpretable and detailed s
 aliency scores.
LOCATION:Lecture Theatre 2\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
