Computational Neuroscience Journal Club
If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly journal club, held online via Zoom, where two presenters will jointly present a topic. The next topic is ‘Low-rank RNNs’, presented by Yashar Ahmadian and Wayne Soo.

Zoom information: https://us02web.zoom.us/j/84958321096?pwd=dFpsYnpJYWVNeHlJbEFKbW1OTzFiQT09
Meeting ID: 841 9788 6178
Passcode: 659046

Biological neural networks have connectivity characterised by ordered structure alongside disorder or heterogeneity that cannot be accounted for by known neural features. The structured part of connectivity often takes the form of a low-rank matrix, while the heterogeneity is often modelled by a random matrix with independent elements. On the other hand, tasks of low complexity can be implemented by recurrent neural networks (RNNs) exhibiting low-dimensional dynamics, and influential paradigms for training RNNs on such tasks (e.g. FORCE learning and reservoir computing) by construction yield connectivity that is the sum of a random matrix and a low-rank one. The computational benefits of the random component of connectivity (which by itself can lead to chaotic dynamics) during learning or for task performance are not clear. The first two papers we will present link the low-rank component of connectivity to low-dimensional dynamics, and use dynamical mean-field theory to systematically map the phase diagram of networks with such connectivity when the two components are statistically independent and correlated, respectively. The third paper studies gradient-based training of unrestricted RNNs with random initial connectivity on common neuroscience tasks, and shows that the resulting change in connectivity is low-rank. Moreover, the authors find a clear benefit of random initial connectivity in speeding up training, and they provide theoretical insight into this finding by analytically studying learning in linear RNNs.
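As a minimal illustration of the setting the papers study, the sketch below (not taken from any of the papers; all parameter names and values are illustrative) simulates a rate network whose connectivity is the sum of a random matrix and a rank-one structured part, and tracks the latent variable along the structured direction:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500           # network size
g = 0.8           # strength of the random component
dt, T = 0.1, 200  # Euler step and number of integration steps

chi = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random component
m = rng.standard_normal(N)                          # right connectivity vector
n = rng.standard_normal(N)                          # left connectivity vector
J = chi + np.outer(m, n) / N                        # random + rank-one connectivity

x = 0.1 * rng.standard_normal(N)
kappas = []
for _ in range(T):
    r = np.tanh(x)                       # firing rates phi(x)
    x = x + dt * (-x + J @ r)            # rate dynamics dx/dt = -x + J phi(x)
    kappas.append(n @ np.tanh(x) / N)    # latent variable kappa = n.phi(x)/N

print(f"final kappa = {kappas[-1]:.3f}")
```

In this regime the rank-one term confines the structured part of the activity to the direction of m, so the scalar kappa summarises the low-dimensional dynamics that the mean-field analyses in the first two papers characterise.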
1) Mastrogiuseppe, F. and Ostojic, S., Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks, Neuron 99, 609–623 (2018). https://www.sciencedirect.com/science/article/pii/S0896627318305439

2) Schuessler, F. et al., Dynamics of random recurrent networks with correlated low-rank structure, Physical Review Research 2, 013111 (2020). https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.013111

3) Schuessler, F. et al., The interplay between randomness and structure during learning in RNNs, NeurIPS 2020. https://arxiv.org/abs/2006.11036

This talk is part of the Computational Neuroscience series.