Catastrophic Forgetting and Explainable AI in Large-Scale Models for Neuroscience
If you have a question about this talk, please contact Dr. Michail Mamalakis.
This seminar examines the mechanisms of catastrophic forgetting in large-scale AI systems, with particular emphasis on applications in neuroscience. We explore how continual learning on real-world data can lead to knowledge degradation, where sequential training progressively erodes previously acquired representations. We discuss current mitigation approaches, including replay strategies, parameter regularization methods such as Elastic Weight Consolidation (EWC), gradient-based protection techniques, and context-dependent learning, in the context of medical and neuroimaging foundation models. Finally, we consider practical and conceptual strategies to reduce forgetting and support stable, long-term learning in large neuroscience models.
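To make the regularization idea concrete, here is a minimal sketch of the EWC penalty mentioned in the abstract. The abstract gives no implementation details, so the function name, toy parameter values, and diagonal-Fisher assumption below are illustrative only: EWC adds a quadratic term that anchors each parameter to its value after the previous task, weighted by that parameter's estimated Fisher information (its importance to the old task).

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic penalty anchoring parameters `theta` to `theta_old`,
    weighted elementwise by a diagonal Fisher information estimate.
    Added to the new task's loss: L(theta) + lam/2 * sum_i F_i (theta_i - theta_old_i)^2
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy example (hypothetical values): the first parameter has high Fisher
# information, so drifting from its old value is penalized heavily,
# protecting knowledge acquired on the earlier task.
theta_old = np.array([1.0, -0.5])
fisher = np.array([10.0, 0.1])
theta = np.array([1.2, 0.5])
penalty = ewc_penalty(theta, theta_old, fisher, lam=1.0)
```

In practice the penalty is summed over all model parameters and added to the new task's training loss, so gradient updates trade off new-task performance against movement along directions the Fisher estimate marks as important.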
This talk is part of the Foundation AI series.