
Towards a theory of layered neural circuit architectures


If you have a question about this talk, please contact Daniel McNamee.

A central challenge in neuroscience is to identify a general computational principle that explains why cortical circuits are organized into particular structures. I will take the principle of optimal storage as a guideline for deriving optimal neural architectures. Optimal storage requires two ingredients: the maximal storage capacity of a neural network, and a learning rule that achieves that capacity. For conventional recurrent neural networks, the maximal capacity is given by the Gardner bound, which is achieved via the Three-Threshold Learning Rule (3TLR). Calculating the storage capacity of hierarchical neural circuits, however, has remained problematic. Using simulations and Gardner's replica theory, I will present recent results suggesting that the capacity of an expansive autoencoder grows superlinearly with the expansion ratio. I will close by discussing some theoretical challenges and limitations of these networks.
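For readers unfamiliar with the Gardner bound referenced in the abstract: for a single perceptron with N continuous weights, the maximal number of random binary patterns that can be stored is P = 2N (capacity alpha = P/N = 2) in the large-N limit. The following sketch is not from the talk; it is a minimal, illustrative demonstration (function name and parameters are my own) that the classic perceptron learning rule stores random patterns well below this bound and fails well above it.

```python
import numpy as np

def perceptron_converges(N, P, max_epochs=500, seed=0):
    """Try to store P random +/-1 patterns in an N-input perceptron
    using the classic perceptron learning rule; return True if a
    zero-error weight vector is found within max_epochs sweeps."""
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
    y = rng.choice([-1.0, 1.0], size=P)        # random target labels
    w = np.zeros(N)
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(X, y):
            if np.sign(x @ w) != t:            # misclassified pattern
                w += t * x / N                 # perceptron update
                errors += 1
        if errors == 0:                        # all patterns stored
            return True
    return False

# Well below the Gardner bound (alpha = 0.5): storage succeeds.
print(perceptron_converges(N=100, P=50))
# Well above the bound (alpha = 3): random patterns are not separable.
print(perceptron_converges(N=100, P=300))
```

At finite N the transition around alpha = 2 is soft, so the sketch probes ratios far from the bound on either side; the 3TLR and the autoencoder results in the talk concern richer architectures than this single unit.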

This talk is part of the Computational Neuroscience series.


© 2006-2019 Talks.cam, University of Cambridge.