
From Data to Models to Understanding: Evaluating Neural Latents and Finding Decision Boundaries in Recurrent Neural Networks (RNNs)



Advances in machine learning have unlocked increasingly rich computational models of cognition and its underlying neural dynamics. This richness brings challenges of several kinds; I will discuss two of them and ways to address them.

Model evaluation: ensuring that models fitted to neural data align with the true underlying dynamics, to which we do not have direct access. I will show how a model’s few-shot generalisation, its ability to predict held-out parts of the data from a few examples, helps quantify this match. This approach selects models that capture the full richness of the data without ‘inventing’ extraneous features.

Model analysis: many dynamical models exhibit multistability, marked by decision boundaries (separatrices) in their state space that are hard to locate, especially in high dimensions. We introduce a Koopman-theory-driven neural network that learns a scalar function vanishing on the separatrix, and demonstrate its use on simple systems and RNNs to design optimal perturbations that cross these boundaries and to predict the outcomes of optogenetic stimulation.
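The separatrix-finding approach is described here only at a high level. For a concrete feel for the underlying Koopman idea (a scalar function satisfying psi(F(x)) ≈ λ·psi(x) for an unstable eigenvalue λ > 1 vanishes on the basin boundary), the sketch below applies it to a toy one-dimensional bistable system, using a fixed polynomial dictionary and a constrained least-squares fit. It is an illustrative stand-in only: the toy system, the dictionary, and the solver are assumptions made for this example, not the neural-network method presented in the talk.

```python
# Minimal sketch (not the speaker's method): locate the separatrix of a
# toy 1-D bistable system, dx/dt = x - x^3, whose attractors sit at x = +1
# and x = -1 and whose basin boundary is the unstable fixed point x = 0.
# We look for a scalar function psi with the Koopman eigenfunction property
# psi(F_T(x)) ~ lambda * psi(x), lambda = exp(T) > 1 (the linearisation rate
# at the unstable fixed point is f'(0) = 1), using a polynomial dictionary
# and a constrained least-squares fit instead of a neural network.
import numpy as np
from scipy.linalg import eigh

T, n_steps = 0.5, 500
def flow(x, dt=T / n_steps):
    """Integrate dx/dt = x - x**3 for time T with small Euler steps."""
    for _ in range(n_steps):
        x = x + dt * (x - x**3)
    return x

rng = np.random.default_rng(0)
x0 = rng.uniform(-0.9, 0.9, size=2000)       # sample around the boundary
x1 = flow(x0)                                 # snapshot pairs (x0, x1)

def dictionary(x, degree=5):
    """Monomial features [1, x, x^2, ..., x^degree]."""
    return np.stack([np.asarray(x)**k for k in range(degree + 1)], axis=-1)

PX, PY = dictionary(x0), dictionary(x1)
lam = np.exp(T)                               # target unstable eigenvalue

# Minimise ||PY @ c - lam * PX @ c||^2 subject to ||PX @ c|| = 1:
# a symmetric generalised eigenvalue problem; take the smallest eigenpair.
A = PY - lam * PX
vals, vecs = eigh(A.T @ A, PX.T @ PX)
c = vecs[:, 0]

def psi(x):
    """Learned scalar function; its zero level set approximates the separatrix."""
    return dictionary(x) @ c

c = c if psi(0.5) > 0 else -c                 # fix the arbitrary overall sign
print(psi(np.array([-0.8, -0.3, 0.0, 0.3, 0.8])))   # should change sign near x = 0
print("basin agreement:", np.mean(np.sign(psi(x0)) == np.sign(x0)))
```

In higher-dimensional systems such as trained RNNs, the talk's method presumably replaces the fixed dictionary with a learned neural parameterisation of psi; the sketch is meant only to convey the defining property of the function being learned.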

This talk is part of the Computational Neuroscience series.
