Alternatives to Backpropagation
If you have a question about this talk, please contact Puria Radmard.

Please join us for our Computational Neuroscience journal club on Tuesday 27th February at 3pm UK time in the CBL seminar room. The title is “Alternatives to Backpropagation”, presented by Youjing Yu and Guillaume Hennequin.

Summary: Backpropagation is one of the most widely used algorithms for training neural networks. Despite its popularity, however, there are several arguments against its use, one of the most important being its biological implausibility. In this journal club meeting, we are going to take a look at some alternatives that have been developed.

We start by digesting the Forward-Forward algorithm proposed by Geoffrey Hinton [1]. Instead of running one forward pass through the network followed by one backward pass, as in backpropagation, the Forward-Forward algorithm utilises two forward passes: one with positive (real) data and another with negative (fake) data. Each layer in the network has its own objective function, which is to generate high “goodness” for positive data and low “goodness” for negative data. We will dive into the working principles of the algorithm, its effectiveness on small problems, and the associated limitations. (A toy sketch of the layer-local training step appears after the references below.)

Next, we will present another cool idea that has been independently rediscovered by several labs, and was perhaps most cleanly articulated by Meulemans et al. [2]. This idea phrases learning as a least-control problem: a feedback control loop is set up that continuously keeps the learning system (e.g. a neural network) in a state of minimum loss, and learning becomes the problem of progressively doing away with the controls. As it turns out, gradient information is available in the control signals themselves, so that learning becomes local. We will give a general introduction to and history of this idea, and look into Meulemans et al. [2] in some detail. (A schematic statement of the principle also follows the references.)

[1] Hinton, Geoffrey. “The forward-forward algorithm: Some preliminary investigations.” arXiv preprint arXiv:2212.13345 (2022).
[2] Meulemans, Alexander, et al. “The least-control principle for local learning at equilibrium.” Advances in Neural Information Processing Systems 35 (2022): 33603-33617.

This talk is part of the Computational Neuroscience series.
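For anyone who wants something concrete before the meeting, here is a minimal sketch of the layer-local Forward-Forward training step described above, loosely following Hinton [1]. It assumes goodness defined as the sum of squared activities and a softplus-style local loss; the layer sizes, learning rate, threshold, and random stand-in data are illustrative assumptions, not the paper’s exact setup.

```python
# Minimal sketch of layer-local Forward-Forward training (after Hinton [1]).
# Sizes, hyperparameters, and the fake data below are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold  # goodness threshold for this layer
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalise the input so only the direction of activity is passed
        # on; the layer above cannot simply read off the previous goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activities. Push it above the threshold
        # for positive (real) data, below it for negative (fake) data.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = (F.softplus(self.threshold - g_pos)
                + F.softplus(g_neg - self.threshold)).mean()
        self.opt.zero_grad()
        loss.backward()  # gradient stays within this single layer
        self.opt.step()
        # Detach before passing activities upward: no gradient ever flows
        # between layers, only (normalised) activations do.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Two forward passes per update -- one on positive data, one on negative --
# with each layer optimising its own local objective.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.rand(64, 784)  # stand-in for real data
x_neg = torch.rand(64, 784)  # stand-in for corrupted / fake data
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

The key design point is that `loss.backward()` only ever touches one layer’s parameters, and activations are detached before being passed upward, so nothing like a global backward pass is required.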
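And for the least-control principle, a schematic statement in symbols. This is our paraphrase, in our own notation, of the setup in Meulemans et al. [2]; treat it as an orienting sketch rather than the paper’s precise formulation.

```latex
% Free dynamics of the learning system, with state \phi and weights \theta:
%   \dot{\phi} = f(\phi, \theta).
% A feedback controller adds a signal u that holds the system at a
% loss-minimising equilibrium, and learning minimises the control needed:
\min_{\theta,\, u} \; \tfrac{1}{2}\,\lVert u \rVert^{2}
\quad \text{s.t.} \quad
f(\phi, \theta) + u = 0, \qquad \nabla_{\phi}\, L(\phi) = 0.
% Gradient information is carried by the control signal itself: first-order
% weight updates take the local form
%   \Delta\theta \propto u^{\top}\, \partial f / \partial \theta,
% and as training drives the loss down, less and less control is needed.
```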