Computational Neuroscience Journal Club

If you have a question about this talk, please contact Daniel McNamee.

Alberto Bernacchia will cover:

Recurrent neural networks operating in the near-chaotic regime exhibit complex dynamics reminiscent of neural activity in higher cortical areas. As a result, these networks have been proposed as models of cortical computation during cognitive tasks. However, existing methods for training the connectivity of these networks are either biologically implausible or require an instantaneous, continuous, real-time error signal to guide the learning process. The lack of a plausible learning method may restrict the applicability of recurrent neural networks as models of cortical computation. Here we introduce a biologically plausible learning rule that can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. We use this method to learn various tasks from the experimental literature, showing that the learning rule can successfully implement flexible associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The trained networks exhibit complex dynamics previously observed in animal cortex, such as dynamic encoding and maintenance of task features, switching from stimulus-specific to response-specific representations, and selective integration of relevant input streams. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
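
The abstract does not specify the update rule, but the general setting it describes, training a near-chaotic recurrent network from a single delayed reward per trial, can be illustrated with a generic three-factor (reward-modulated node-perturbation) scheme: an eligibility trace of noise-presynaptic-rate correlations is accumulated during the trial, and the recurrent weights are changed only when the scalar reward arrives at trial end. The sketch below is an assumption for illustration only, not Bernacchia's method; the network size, toy task, and all parameter values are invented.

# A minimal sketch of trial-based, reward-modulated learning in a recurrent
# network. This is NOT the speaker's rule: it is a generic three-factor
# (node-perturbation) scheme in which an eligibility trace is accumulated
# during the trial and the weights change only when a delayed, phasic scalar
# reward arrives at the end. All sizes, parameters, and the toy task are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 100                      # recurrent units
g = 1.5                      # gain > 1 places the network near the chaotic regime
dt, tau = 0.01, 0.1          # integration step and neuronal time constant
W = g * rng.standard_normal((N, N)) / np.sqrt(N)    # recurrent weights
w_in = rng.standard_normal(N)                       # input weights
w_out = rng.standard_normal(N) / np.sqrt(N)         # fixed linear readout

def run_trial(W, stimulus, T=200, noise_std=0.05):
    """Simulate one trial; return the readout at trial end and the
    eligibility trace (noise x presynaptic rate) accumulated over the trial."""
    x = 0.1 * rng.standard_normal(N)
    eligibility = np.zeros_like(W)
    for _ in range(T):
        r = np.tanh(x)                               # firing rates
        xi = noise_std * rng.standard_normal(N)      # exploratory noise
        x = x + (dt / tau) * (-x + W @ r + w_in * stimulus) + xi
        eligibility += np.outer(xi, r)               # perturbation times presynaptic rate
    return w_out @ np.tanh(x), eligibility

# Toy flexible-association task: the output at trial end should report
# the sign of the stimulus presented throughout the trial.
eta = 0.1                    # learning rate
reward_baseline = 0.0        # running estimate of the expected reward
for trial in range(2000):
    stimulus = rng.choice([-1.0, 1.0])
    output, eligibility = run_trial(W, stimulus)
    reward = -(output - stimulus) ** 2               # delayed reward: one number per trial
    # Three-factor update: eligibility gated by the reward prediction error.
    W += eta * (reward - reward_baseline) * eligibility
    reward_baseline += 0.1 * (reward - reward_baseline)

Subtracting a running reward baseline is a common variance-reduction choice in this family of rules; the rule presented in the talk may differ in how the eligibility trace and the reward signal are defined and combined.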

This talk is part of the Computational Neuroscience series.
