
Catastrophic Forgetting and Explainable AI in Large-Scale Models for Neuroscience

If you have a question about this talk, please contact Dr. Michail Mamalakis.

This seminar examines the mechanisms of catastrophic forgetting in large-scale AI systems, with particular emphasis on applications in neuroscience. We explore how continual learning on real-world data can lead to knowledge degradation, where sequential training progressively erodes previously acquired representations. We discuss current mitigation approaches, including replay strategies, parameter-regularization methods such as Elastic Weight Consolidation (EWC), gradient-based protection techniques, and context-dependent learning, in the context of medical and neuroimaging foundation models. Finally, we consider practical and conceptual strategies to reduce forgetting and support stable, long-term learning in large neuroscience models.
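
As a rough illustration of the parameter-regularization idea mentioned above, the Python sketch below implements an EWC-style penalty on top of PyTorch. The function names, the diagonal Fisher approximation, and the default strength lam are illustrative assumptions, not material from the talk itself.

import torch

def fisher_diagonal(model, data_loader, loss_fn):
    # Approximate the diagonal of the Fisher information by averaging
    # squared gradients of the old task's loss over its data.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor, lam=100.0):
    # EWC regulariser: (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2,
    # pulling weights marked important by the Fisher estimate back toward
    # `anchor`, their values saved after training on the old task.
    penalty = sum(
        (fisher[n] * (p - anchor[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return 0.5 * lam * penalty

# Usage sketch: after training on task A,
#   fisher = fisher_diagonal(model, task_a_loader, loss_fn)
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
# then, while training on task B, add the penalty to the new loss:
#   loss = loss_fn(model(x), y) + ewc_penalty(model, fisher, anchor)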

This talk is part of the Foundation AI series.
