BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Isaac Newton Institute Seminar Series
SUMMARY:Learning and Extrapolation in Graph Neural Network
 s - Stefanie Jegelka (Massachusetts Institute of T
 echnology)
DTSTART;TZID=Europe/London:20211123T120000
DTEND;TZID=Europe/London:20211123T130000
UID:TALK164854AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/164854
DESCRIPTION:Graph Neural Networks (GNNs) have become a popular
  tool for learning representations of graph-struct
 ured inputs\, with applications in computational c
 hemistry\, recommendation\, pharmacy\, reasoning\,
  and many other areas. In this talk\, I will show 
 some recent results on learning with message-passi
 ng GNNs. In particular\, GNNs possess important in
 variances and inductive biases that affect learnin
 g and generalization. Studying the effect of these
  inductive biases can be challenging\, as they are
  affected by the architecture (structure and aggre
 gation functions) and training algorithm and inter
 play with data and learning task. In particular\, 
 we study these biases for learning structured task
 s\, e.g.\, simulations or algorithms\, and show ho
 w for such tasks\, architecture choices affect gen
 eralization within and outside the training distri
 bution.\nThis talk is based on joint work with Key
 ulu Xu\, Jingling Li\, Mozhi Zhang\, Simon S. Du a
 nd Ken-ichi Kawarabayashi.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
