
Computational Neuroscience Journal Club


If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly Computational Neuroscience journal club on Tuesday 30th May at 2pm UK time in the CBL seminar room, or online on Zoom. The talk, titled ‘Dendritic computations and backpropagation’, will be presented by Will Greedy from the University of Bristol.

https://eng-cam.zoom.us/j/84204498431?pwd=Um1oU284b1YxWThObGw4ZU9XZitWdz09 Meeting ID: 842 0449 8431 Passcode: 684140

Summary: The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether the brain could adopt a similar strategy to correctly modify its synapses. Recent models have attempted to bridge this gap while being consistent with a range of experimental observations.
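As background (not part of the talk materials), the sketch below illustrates how backprop assigns credit in a toy two-layer network: the output error is passed back through the transpose of the forward weights to give each hidden unit its own error signal. The network sizes, variable names, and squared-error loss are illustrative assumptions only.

    # Minimal sketch of backprop credit assignment in a two-layer network
    # (illustrative only; sizes, names, and the loss are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)           # input
    y = rng.standard_normal(3)            # target
    W1 = rng.standard_normal((20, 10)) * 0.1
    W2 = rng.standard_normal((3, 20)) * 0.1

    # Forward pass
    h = np.tanh(W1 @ x)                   # hidden activity
    y_hat = W2 @ h                        # linear readout

    # Backward pass: the output error is propagated through the transpose
    # weights, assigning each hidden unit credit for its share of the loss.
    e_out = y_hat - y                     # output-layer error
    e_hid = (W2.T @ e_out) * (1 - h**2)   # hidden-layer error (chain rule through tanh)

    # Gradient-descent updates
    lr = 0.1
    W2 -= lr * np.outer(e_out, h)
    W1 -= lr * np.outer(e_hid, x)

The question addressed by the models below is whether cortical circuits could compute and deliver signals playing the role of e_hid without the biologically implausible weight transport and phase separation that this textbook form requires.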

In this journal club, I will:

1. Introduce the general field of backprop in dendritic networks.

2. Review Error-encoding Dendritic Networks (EDNs; Sacramento et al., 2018, NeurIPS), the paper that first introduced the idea of using distinct cell types and distal dendrites to jointly encode error signals.

3. Introduce Burstprop (Payeur et al., 2021, Nature Neuroscience), which proposes that the brain multiplexes two types of signals: single-spike events for inference and bursts for learning. This in turn suggests the need for specialised short-term synaptic plasticity to decode these signals.

4. Both EDNs and Burstprop are either unable to effectively backpropagate error signals across multiple layers or require a multi-phase learning process, neither of which is reminiscent of learning in the brain. Finally, I will introduce our recent model, Bursting Cortico-Cortical Networks (BurstCCN; Greedy et al., 2022, NeurIPS), which resolves these issues by integrating known properties of cortical networks, namely bursting activity, short-term plasticity (STP), and dendrite-targeting interneurons (a simplified sketch of the burst-multiplexing idea follows this list).
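The following schematic sketches the burst-multiplexing idea referred to in points 3 and 4: a neuron's event rate carries the feedforward (inference) signal, while the fraction of events that are bursts carries a top-down (error) signal used for learning. This is a simplification for orientation only, not the published Burstprop or BurstCCN implementation; all names, sizes, the placeholder top-down signal, and the specific update rule are assumptions.

    # Schematic sketch of burst multiplexing (simplified; not the actual
    # Burstprop/BurstCCN code).
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.random(10)                        # presynaptic event rates (inference signal)
    W = rng.standard_normal((5, 10)) * 0.1

    event_rate = 1 / (1 + np.exp(-(W @ x)))   # somatic drive -> event rate
    p0 = 0.2                                  # baseline burst probability
    top_down = rng.standard_normal(5) * 0.05  # placeholder feedback at apical dendrites
    burst_prob = np.clip(p0 + top_down, 0.0, 1.0)

    # Burst-dependent plasticity: deviations of burst probability from the
    # baseline act as an error signal, so the weight update resembles a
    # gradient step gated by presynaptic activity.
    lr = 0.5
    W += lr * np.outer((burst_prob - p0) * event_rate, x)

In the models discussed in the talk, short-term plasticity and dendrite-targeting interneurons provide the biological machinery for separating and decoding these two signal streams.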

Overall, these results suggest that cortical features across sub-cellular, cellular, microcircuit, and systems levels jointly underlie single-phase efficient deep learning in the brain.

This talk is part of the Computational Neuroscience series.
