Computational Neuroscience Journal Club

If you have a question about this talk, please contact Rodrigo Echeveste.

Adrianna Loback will cover:

• A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning

• David Kappel, Robert Legenstein, Stefan Habenschuss, Michael Hsieh and Wolfgang Maass

• eNeuro (in press, 2018)

http://www.eneuro.org/content/eneuro/early/2018/04/02/ENEURO.0301-17.2018.full.pdf

Version of the manuscript with inline figures:

https://arxiv.org/pdf/1704.04238.pdf

Abstract: Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous, synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models of network plasticity, and raise the questions of how neural circuits can maintain stable computational function in spite of these continuously ongoing processes, and of what functional uses these processes might have. Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance has been reached, these changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.
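As a rough illustration of the "continuous sampling of network configurations" idea, the sketch below runs Langevin-style dynamics on a vector of synaptic parameters: a deterministic drift toward a prior plus an estimated reward gradient, with persistent noise standing in for the spontaneous synapse-autonomous component. The Gaussian prior, the toy quadratic reward, the finite-difference gradient estimate, and all constants here are illustrative assumptions, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn = 50           # number of potential synaptic connections
beta = 0.01          # learning rate / integration step (assumed value)
temperature = 0.1    # scales the synapse-autonomous noise (assumed value)
prior_mean, prior_var = 0.5, 1.0   # Gaussian log-prior over parameters (assumed)

theta = rng.normal(prior_mean, 0.5, n_syn)   # synaptic parameters
target = rng.normal(prior_mean, 0.5, n_syn)  # toy task: match a target pattern

def reward(theta):
    """Toy reward: high when the functional weights match the target."""
    w = np.maximum(theta, 0.0)   # a synapse with theta <= 0 is retracted
    return -float(np.sum((w - np.maximum(target, 0.0)) ** 2))

eps = 1e-3
for step in range(5000):
    # Drift toward the prior keeps parameters bounded.
    d_prior = -(theta - prior_mean) / prior_var
    # Crude finite-difference estimate of the reward gradient, standing in
    # for the dopamine-gated plasticity term described in the abstract.
    r0 = reward(theta)
    d_reward = np.array([(reward(theta + eps * e) - r0) / eps
                         for e in np.eye(n_syn)])
    # Langevin update: deterministic drift plus persistent stochastic term.
    noise = rng.normal(0.0, 1.0, n_syn)
    theta += beta * (d_prior + d_reward) + np.sqrt(2 * beta * temperature) * noise

print("final reward:", reward(theta))
print("functional synapses:", int(np.sum(theta > 0)), "of", n_syn)
```

Because the noise term never vanishes, the parameters keep wandering after the reward plateaus, loosely mirroring the slow drift in task-irrelevant dimensions that the abstract describes.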

This talk is part of the Computational Neuroscience series.
