
The Continual Learning (CL) Theory



The Continual Learning (CL) Theory journal club meeting next Tuesday will focus on the following four papers (also updated in the JC document). The introduction will loosely follow the review paper Continual task learning in natural and artificial agents.

1. Continual Learning Through Synaptic Intelligence – presents Synaptic Intelligence, a method that enables CL by assigning an importance measure to each synapse and penalizing changes to important ones; the method is partially theoretically tractable (a sketch of its surrogate loss follows this list).

2. A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix – provides a theoretical analysis of catastrophic forgetting in wide neural networks trained with vanilla gradient descent, as well as with its CL-friendly variants.

3. How catastrophic can catastrophic forgetting be in linear regression? – establishes bounds on forgetting in linear regression with task repetitions using projection theory (see the projection sketch after this list).

4. Order parameters and phase transitions of continual learning in deep neural networks – offers a statistical mechanics framework covering both retrograde and anterograde effects in a teacher-student setting.
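
For paper 1, the core of Synaptic Intelligence is a quadratic surrogate loss that anchors each parameter in proportion to its accumulated importance for past tasks. The sketch below is a minimal illustration of those update rules, not the authors' reference implementation: the toy linear model, data, optimizer settings, and training loop are assumptions, while c (penalty strength), xi (damping), omega (per-task path integral), and Omega (consolidated importance) follow the paper's notation.

    # Illustrative sketch of the Synaptic Intelligence surrogate loss
    # (Zenke, Poole & Ganguli, 2017). Only the omega/Omega updates and
    # the quadratic penalty follow the paper; everything else is a toy
    # setup chosen for demonstration.
    import torch

    model = torch.nn.Linear(10, 1)                 # toy model (assumption)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    c, xi = 0.1, 1e-3                              # paper's c and xi

    params = list(model.parameters())
    omega = [torch.zeros_like(p) for p in params]      # per-task path integral
    Omega = [torch.zeros_like(p) for p in params]      # consolidated importance
    theta_ref = [p.detach().clone() for p in params]   # params after last task

    def train_task(X, y, steps=200):
        global theta_ref
        start = [p.detach().clone() for p in params]
        for _ in range(steps):
            opt.zero_grad()
            task_loss = torch.nn.functional.mse_loss(model(X), y)
            # quadratic penalty anchoring weights important for old tasks
            penalty = sum((O * (p - ref) ** 2).sum()
                          for O, p, ref in zip(Omega, params, theta_ref))
            (task_loss + c * penalty).backward()
            old = [p.detach().clone() for p in params]
            opt.step()
            # accumulate omega_k += -g_k * delta_theta_k along the trajectory
            for w, p, o in zip(omega, params, old):
                w += -p.grad * (p.detach() - o)
        # end of task: consolidate importances, then reset for the next task
        for O, w, p, s in zip(Omega, omega, params, start):
            O += w / ((p.detach() - s) ** 2 + xi)
            w.zero_()
        theta_ref = [p.detach().clone() for p in params]

    # two toy "tasks" with different random regression targets (assumption)
    for _ in range(2):
        X, y = torch.randn(64, 10), torch.randn(64, 1)
        train_task(X, y)

After the first task, the penalty c * sum_k Omega_k (theta_k - theta_ref_k)^2 discourages changes to weights that mattered for earlier tasks while leaving unimportant weights free to move.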
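
For paper 3, the linear-regression analysis rests on the observation that fitting each task to convergence with gradient descent projects the current weights onto that task's solution set, so a task sequence becomes a sequence of alternating projections. The NumPy sketch below illustrates this under stated assumptions: the dimensions, the shared realizable teacher, and the cyclic repetition schedule are illustrative choices, not the paper's experimental setup.

    # Illustrative NumPy sketch of sequential linear regression as
    # alternating projections, in the spirit of "How catastrophic can
    # catastrophic forgetting be in linear regression?". All sizes and
    # the task schedule are assumptions for demonstration.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 50, 10                        # ambient dim, samples per task
    w_star = rng.normal(size=d)          # shared teacher: tasks jointly realizable

    tasks = []
    for _ in range(3):
        X = rng.normal(size=(n, d))
        tasks.append((X, X @ w_star))

    def project(w, X, y):
        # GD on task (X, y) run to convergence from w lands on the
        # Euclidean projection of w onto the affine set {v : X v = y}
        # (X has full row rank here, so pinv gives that projection).
        return w + np.linalg.pinv(X) @ (y - X @ w)

    def loss(w, X, y):
        return np.mean((X @ w - y) ** 2)

    w = np.zeros(d)
    schedule = [0, 1, 2, 0, 1, 2]        # cyclic task repetitions (assumption)
    for t in schedule:
        w = project(w, *tasks[t])
        per_task = [loss(w, *tasks[i]) for i in range(len(tasks))]
        print(f"after task {t}: per-task losses {np.round(per_task, 4)}")

With jointly realizable tasks and repetitions, the printed per-task losses shrink toward zero across cycles, mirroring the convergence of alternating projections to the intersection of the tasks' solution sets; the losses on previously seen tasks after each projection are exactly the forgetting that the paper bounds.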

This talk is part of the Computational Neuroscience series.



 
