
Alternatives to Backpropagation


If you have a question about this talk, please contact Puria Radmard.

Please join us for our Computational Neuroscience journal club on Tuesday 27th February at 3pm (UK time) in the CBL seminar room.

The title is “Alternatives to Backpropagation”, presented by Youjing Yu and Guillaume Hennequin.

Summary:

Backpropagation is one of the most widely used algorithms for training neural networks. However, despite its popularity, there are several arguments against its use, one of the most important being its biological implausibility. In this journal club meeting, we are going to take a look at some of the alternatives to backpropagation that have been developed.

We start by digesting the Forward-Forward algorithm proposed by Geoffrey Hinton [1]. Instead of running one forward pass through the network followed by one backward pass as in backpropagation, the Forward-Forward algorithm utilises two forward passes, one with positive, real data and another with negative, fake data. Each layer in the network has its own objective function, which is to generate high “goodness” for positive data and low “goodness” for negative data. We will dive into the working principles of the algorithm, its effectiveness on small problems and the associated limitations.
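As a rough illustration (not code from the talk or the paper), the per-layer objective can be sketched in a few lines of NumPy. The layer sizes, the goodness threshold `theta`, the logistic loss on goodness, and the synthetic "real"/"fake" data below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a single ReLU layer trained with a local,
# Forward-Forward-style objective (sizes and threshold are illustrative).
n_in, n_hidden, theta, lr = 20, 32, 2.0, 0.03
W = rng.normal(0, 0.1, (n_in, n_hidden))

def goodness(h):
    # "Goodness" of a layer: sum of squared activities per sample.
    return (h ** 2).sum(axis=1)

def layer_step(x, sign):
    # sign = +1 for positive (real) data, -1 for negative (fake) data.
    h = np.maximum(x @ W, 0.0)                       # forward pass (ReLU)
    p = 1.0 / (1.0 + np.exp(-sign * (goodness(h) - theta)))
    # Gradient of -log p w.r.t. W for this local objective:
    # d(-log p)/d(goodness) = -sign * (1 - p), and d(goodness)/dW
    # follows from goodness = sum(h^2) through the ReLU.
    dh = (-(1 - p) * sign)[:, None] * 2 * h * (h > 0)
    return x.T @ dh / len(x)

x_pos = rng.normal(1.0, 1.0, (64, n_in))             # stand-in "real" data
x_neg = rng.normal(0.0, 1.0, (64, n_in))             # stand-in "fake" data
for _ in range(200):
    # The update uses only this layer's activities: no backward pass.
    W -= lr * (layer_step(x_pos, +1) + layer_step(x_neg, -1))

# Positive data should end up with higher mean goodness than negative data.
print(goodness(np.maximum(x_pos @ W, 0)).mean(),
      goodness(np.maximum(x_neg @ W, 0)).mean())
```

In a deeper network each layer would be trained with this same local rule on the (normalised) output of the layer below, which is what removes the need for a global backward pass.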

Next, we will present another cool idea that has been independently re-discovered by several labs, and was perhaps most cleanly articulated in Meulemans et al., NeurIPS 2022 [2]. This idea frames learning as a least-control problem: a feedback control loop is set up that continuously keeps the learning system (e.g. a neural network) in a state of minimum loss, and learning becomes the problem of progressively doing away with control. As it turns out, gradient information is available in the control signals themselves, so that learning becomes local. We will give a general introduction to and history of this idea, and look into Meulemans et al. in some detail.
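Schematically, and using our own shorthand rather than the paper's exact formulation, the least-control problem can be written as:

$$
\min_{\theta}\ \tfrac{1}{2}\,\lVert u_* \rVert^2
\quad \text{subject to} \quad
f(\phi_*, \theta) + u_* = 0,
\qquad
\nabla_{\phi}\,\mathcal{L}(\phi_*) = 0,
$$

where $\phi_*$ is the equilibrium state of the controlled network dynamics $\dot{\phi} = f(\phi, \theta) + u$, $u_*$ is the control input holding the system at a loss minimum, and $\mathcal{L}$ is the task loss. The key point is that a first-order parameter update of the form $\Delta\theta \propto u_*^{\top}\, \partial f(\phi_*, \theta)/\partial\theta$ shrinks the needed control using only signals available at the controlled equilibrium, which is what makes the rule local.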

[1] Hinton, Geoffrey. “The forward-forward algorithm: Some preliminary investigations.” arXiv preprint arXiv:2212.13345 (2022).

[2] Meulemans, Alexander, et al. “The least-control principle for local learning at equilibrium.” Advances in Neural Information Processing Systems 35 (2022): 33603-33617.

This talk is part of the Computational Neuroscience series.


© 2006-2024 Talks.cam, University of Cambridge.