BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Computational Neuroscience
SUMMARY:Computational Neuroscience Journal Club - Guillaum
 e Hennequin (University of Cambridge)
DTSTART;TZID=Europe/London:20240227T150000
DTEND;TZID=Europe/London:20240227T170000
UID:TALK212731@talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/212731
DESCRIPTION:Please join us for our Computational Neuroscience 
 journal club on Tuesday 27th February at 3pm UK ti
 me in the CBL seminar room.\n\nThe title is “Altern
 atives to Backpropagation”\, presented by Youjing 
 Yu and Guillaume Hennequin.\n\nSummary:\n\nBackpro
 pagation is one of the most widely used algorithms
  for training neural networks. However\, despite i
 ts popularity\, there are several arguments agains
 t the use of backpropagation\, one of the most imp
 ortant being its biological implausibility. In thi
 s journal club meeting\, we are going to take a lo
 ok at some alternatives that have been developed to replace backprop
 agation.\n\nWe start by digesting the Forward-Forward a
 lgorithm proposed by Geoffrey Hinton [1]. Instead 
 of running one forward pass through the network fo
 llowed by one backward pass as in backpropagation\
 , the Forward-Forward algorithm utilises two forwa
 rd passes\, one with positive\, real data and anot
 her with negative\, fake data. Each layer in the n
 etwork has its own objective function\, which is t
 o generate high “goodness” for positive data and l
 ow “goodness” for negative data. We will dive into
  the working principles of the algorithm\, its eff
 ectiveness on small problems and the associated li
 mitations.\n\nNext\, we will present another cool 
 idea that has been independently re-discovered by 
 several labs\, and was perhaps most cleanly articu
 lated in Meulemans et al.\, NeurIPS 2022 [2]. This idea frames learni
 ng as a least-control problem: a f
 eedback control loop is set up that continuously k
 eeps the learning system (e.g. neural network) in 
 a state of minimum loss\, and learning becomes the
  problem of progressively doing away with controls
 . As it turns out\, gradient information is availa
 ble in the control signals themselves\, such that 
 learning becomes local. We will give a general int
 roduction and history of this idea\, and look into
  Meulemans et al. in some detail.\n\n[1] Hinton\, 
 Geoffrey. "The forward-forward algorithm: Some pre
 liminary investigations." arXiv preprint arXiv:221
 2.13345 (2022).\n[2] Meulemans\, Alexander\, et al
 . "The least-control principle for local learning 
 at equilibrium." Advances in Neural Information Pr
 ocessing Systems 35 (2022): 33603-33617.
LOCATION:CBL Seminar Room\, Engineering Department\, 4th fl
 oor Baker Building
CONTACT:Puria Radmard
END:VEVENT
END:VCALENDAR
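
A minimal PyTorch sketch of the per-layer Forward-Forward objective described in the announcement above. The goodness measure (sum of squared activations) and the layer-local loss follow Hinton [1]; the layer sizes, threshold, learning rate, stand-in random batches, and all class and variable names are illustrative assumptions, not details from the talk.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

class FFLayer(torch.nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def forward(self, x):
        # Pass on only the direction of the previous layer's activity,
        # so a layer cannot trivially inherit its predecessor's goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # "Goodness" = sum of squared activations in this layer.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Local objective: goodness above threshold for positive (real)
        # data, below threshold for negative (fake) data.
        loss = (F.softplus(self.threshold - g_pos)
                + F.softplus(g_neg - self.threshold)).mean()
        self.opt.zero_grad()
        loss.backward()          # gradients stay within this layer
        self.opt.step()
        with torch.no_grad():    # detach: nothing flows between layers
            return self.forward(x_pos), self.forward(x_neg)

layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.rand(64, 784)   # stand-in for real (positive) data
x_neg = torch.rand(64, 784)   # stand-in for fake (negative) data
for layer in layers:          # two forward passes, no end-to-end backward pass
    x_pos, x_neg = layer.train_step(x_pos, x_neg)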
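A toy NumPy illustration of the least-control idea from the second half of the abstract: an integral feedback controller holds a simple linear model at zero output error, and the control signal remaining at equilibrium carries the gradient information, so the weight update that shrinks future control is local. This is a deliberately simplified stand-in rather than the construction of Meulemans et al. [2]; the dynamics, gains, and dimensions are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))   # learnable weights
x = rng.normal(size=3)                   # fixed input
y_target = np.array([1.0, -1.0])         # desired output

eta, k, dt = 0.1, 2.0, 0.05
for step in range(200):
    # Inner loop: integral feedback drives the controlled output
    # y = W x + u onto the target, holding the system at minimum loss.
    u = np.zeros(2)
    for _ in range(500):
        y = W @ x + u
        u += dt * k * (y_target - y)     # controller integrates the error
    # At equilibrium u ~ y_target - W x: the control signal is the
    # negative loss gradient w.r.t. the output, available locally.
    # Learning = progressively doing away with the control.
    W += eta * np.outer(u, x)

print("residual control:", np.linalg.norm(y_target - W @ x))  # -> ~0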
