
Probabilistic synapses


If you have a question about this talk, please contact Dr Máté Lengyel.

Organisms face a hard problem: based on noisy sensory input, they must set a large number of synaptic weights. However, they do not receive enough information in their lifetime to learn the correct, or optimal, weights (i.e. the weights that ensure the circuit, system, and ultimately the organism function as effectively as possible). The best they can do instead is compute a probability distribution over the optimal weights. Based on this observation, we hypothesize that synapses represent probability distributions over weights, in contrast to the widely held belief that they represent point estimates. From this hypothesis we derive learning rules for supervised, reinforcement, and unsupervised learning. These rules introduce a new feature: the more uncertain a synapse is about its weight, the more plastic it is. This makes intuitive sense: if the uncertainty about a weight is large, new data should strongly influence its value, whereas if the uncertainty is small, little learning is needed. This hypothesis makes two predictions about how learning rates should vary across synapses and across time.

We also introduce a second hypothesis: the more uncertainty there is about a synaptic weight, the more variable it is. More concretely, the PSP amplitude at a given time is a sample from the probability distribution describing the synapse's uncertainty. This hypothesis makes several predictions, and we present data for one: that variability should increase as the presynaptic firing rate falls.
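The uncertainty-dependent plasticity described above can be illustrated with a minimal sketch. Assuming a Gaussian belief over a single weight and a Kalman-filter-style update (an assumption for illustration; the talk's actual learning rules are not specified here), the learning rate emerges as a function of the synapse's own uncertainty, and each PSP amplitude can be drawn as a sample from the current belief:

```python
import random

def bayesian_synapse_update(mu, var, target, obs_var):
    """One Kalman-filter-style update of a Gaussian belief over a weight.

    The effective learning rate grows with the synapse's own uncertainty
    (var), so an uncertain synapse is more plastic than a confident one.
    """
    lr = var / (var + obs_var)        # uncertainty-dependent learning rate
    mu_new = mu + lr * (target - mu)  # mean moves toward the new evidence
    var_new = (1.0 - lr) * var        # uncertainty shrinks after learning
    return mu_new, var_new, lr

def sample_psp(mu, var):
    """Second hypothesis: a PSP amplitude is a sample from the
    distribution describing the synapse's uncertainty."""
    return random.gauss(mu, var ** 0.5)

# An uncertain synapse (var = 1.0) updates far more strongly than a
# confident one (var = 0.01) given the same evidence.
_, _, lr_uncertain = bayesian_synapse_update(0.0, 1.0, 1.0, 0.1)
_, _, lr_confident = bayesian_synapse_update(0.0, 0.01, 1.0, 0.1)
```

Here `lr_uncertain` (about 0.91) greatly exceeds `lr_confident` (about 0.09), capturing the intuition that large uncertainty means new data should strongly influence the weight.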

This talk is part of the Computational Neuroscience series.



