BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Superposition in GNNs - Lukas Pertl
DTSTART:20250512T160000Z
DTEND:20250512T164500Z
UID:TALK230206@talks.cam.ac.uk
CONTACT:Pietro Lio
DESCRIPTION:Existing mechanistic interpretability efforts seem severely th
 reatened by superposition\, an effect in which a neural network represent
 s more “features” than it has neurons. Previous papers have used toy mode
 ls with MLP architectures to study both representational superposition (c
 aused by passing higher-dimensional data through a lower-dimensional hidd
 en layer) and computation in superposition. Here\, for the first time\, t
 oy models are used to study how superposition arises in graph neural netw
 orks (GNNs). We demonstrate: (i) that superposition in GNNs can arise sim
 ilarly to MLPs through compression\, though different aggregation functio
 ns affect this phenomenon differently\, with max pooling notably discoura
 ging superposition\; (ii) that the inherent topology of graphs enables th
 e construction of toy models in which superposition arises even in the ab
 sence of compression\, and we discuss the algorithms the model finds to d
 o this\; (iii) that graph isomorphism networks (GINs) can lead to the eme
 rgence of superposition within a lower-dimensional subspace of a larger e
 mbedding\, suggesting that superposition inadvertently creates metastable
  minima\; and (iv) how superposition emerges in real-life binary classifi
 cation datasets.
LOCATION:Lecture Theatre 2\, Computer Laboratory\, William Gates Building
END:VEVENT
END:VCALENDAR
