BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:id366's list
SUMMARY:Sign and Basis Invariant Networks for Spectral Gra
 ph Representation Learning - Joshua Robinson\, MIT
DTSTART;TZID=Europe/London:20221012T163000
DTEND;TZID=Europe/London:20221012T173000
UID:TALK180422AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/180422
DESCRIPTION:Eigenvectors computed from data arise in various s
 cenarios\, including principal component analysi
 s and matrix factorizations. Another key example i
 s the eigenvectors of the graph Laplacian\, which en
 code information about the structure of a graph or
  manifold. An important recent application of Lapl
 acian eigenvectors is to graph positional encodings
 \, which have been used to develop more powerful g
 raph architectures. However\, eigenvectors have sy
 mmetries that should be respected by models taking
  eigenvector inputs: (i) sign flips\, since if v i
 s an eigenvector then so is -v\; and (ii) more gen
 eral basis symmetries\, which occur in higher dime
 nsional eigenspaces with infinitely many choices o
 f basis eigenvectors. We introduce SignNet and Bas
 isNet—new neural network architectures that are s
 ign and basis invariant. We prove that our networ
 ks are universal\, i.e.\, they can approximate any
  continuous function of eigenvectors with the desi
 red invariances. Moreover\, when used with Laplaci
 an eigenvectors\, our architectures are provably e
 xpressive for graph representation learning: they 
 can approximate—and go beyond—any spectral graph c
 onvolution\, and can compute spectral invariants t
 hat go beyond message passing neural networks. Exp
 eriments show the strength of our networks for mol
 ecular graph regression\, learning expressive grap
 h representations\, and more.
LOCATION:Lecture Theatre 1
CONTACT:Iulia Duta
END:VEVENT
END:VCALENDAR
