Computational Neuroscience Journal Club
If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly journal club, held online via Zoom, where two presenters will jointly present a topic. The next topic is 'distributed distributional codes', presented by Yul Kang and Jonathan So.

Zoom information: https://us02web.zoom.us/j/84958321096?pwd=dFpsYnpJYWVNeHlJbEFKbW1OTzFiQT09

It is clear from behavioural studies in a variety of settings that humans are able not only to take uncertainty into account, but to do so in a near Bayes-optimal fashion. What is less clear is how the brain represents uncertainty, or how it performs computations with such representations. One candidate theory is that the brain uses Distributed Distributional Codes (DDCs) to represent probability distributions over quantities of interest. The DDC shares similarities with other schemes that encode distributions in a population of neurons; however, it has some particularly appealing properties with regard to performing computations for downstream tasks. In this journal club we will begin with an overview of DDC representations and then look at two specific applications of the DDC: its use for inference and learning in hierarchical latent variable models, and its use for learning successor representations that allow efficient and flexible reinforcement learning and planning in noisy, partially observable environments. (Two small illustrative code sketches of these ideas appear after the references below.)

1. Vertes, E. & Sahani, M. (2018). Flexible and accurate inference and learning for deep generative models. Advances in Neural Information Processing Systems, 4166-4175. https://proceedings.neurips.cc/paper/2018/file/955cb567b6e38f4c6b3f28cc857fc38c-Paper.pdf

2. Vertes, E. & Sahani, M. (2019). A neurally plausible model learns successor representations in partially observable environments. Advances in Neural Information Processing Systems, 13714-13724. http://papers.neurips.cc/paper/9522-a-neurally-plausible-model-learns-successor-representations-in-partially-observable-environments.pdf

This talk is part of the Computational Neuroscience series.
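To make the DDC idea concrete, here is a minimal NumPy sketch of the core representational scheme described in the abstract: a distribution over a latent variable z is summarised by the expectations of a fixed set of nonlinear encoding functions, and expectations of other functions can then be read out approximately linearly. This is my own toy illustration under assumed choices (tanh random features, a Gaussian example distribution), not code from the talk or the papers.

import numpy as np

rng = np.random.default_rng(0)

# Fixed random encoding functions psi_k(z) = tanh(w_k * z + b_k)
K = 200                                   # number of encoding functions ("neurons")
w = rng.normal(scale=2.0, size=K)
b = rng.normal(scale=1.0, size=K)

def psi(z):
    """Evaluate all K encoding functions at each z; returns shape (len(z), K)."""
    return np.tanh(np.outer(z, w) + b)

# The distribution to be encoded: q(z) = N(1.0, 0.5^2), represented here by samples
z_samples = rng.normal(1.0, 0.5, size=20_000)
r = psi(z_samples).mean(axis=0)           # DDC representation: r_k = E_q[psi_k(z)]

# Downstream computation: estimate E_q[f(z)] for f(z) = z^2 by fitting a linear
# readout f(z) ~= alpha @ psi(z) on a dense grid of z values ...
z_grid = np.linspace(-4, 6, 500)
alpha, *_ = np.linalg.lstsq(psi(z_grid), z_grid**2, rcond=None)

# ... and applying the same readout weights to the DDC vector.
print("DDC estimate of E[z^2]:", alpha @ r)    # close to 1.0^2 + 0.5^2 = 1.25
print("Ground truth:          ", 1.0**2 + 0.5**2)

The appeal hinted at in the abstract is visible here: once the readout weights for a downstream function are known, computing its expectation under the encoded distribution is a single linear operation on the population activity.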
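The second sketch illustrates the successor representation (SR) that reference 2 learns under a DDC in partially observable environments. For simplicity this toy version uses a fully observed five-state chain with an assumed random transition matrix; it shows the standard TD(0) learning rule for the SR and why SR-based planning is flexible: values for any reward vector follow from a single matrix-vector product.

import numpy as np

rng = np.random.default_rng(1)
n_states, gamma, alpha = 5, 0.9, 0.1

# Assumed random transition matrix over a 5-state environment (rows sum to 1)
P = rng.dirichlet(np.ones(n_states), size=n_states)

# SR estimate M(s, s') = E[sum_t gamma^t 1(s_t = s') | s_0 = s], initialised to identity
M = np.eye(n_states)
s = 0
for _ in range(50_000):                   # TD(0) learning of the SR from experience
    s_next = rng.choice(n_states, p=P[s])
    one_hot = np.eye(n_states)[s]
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    s = s_next

# Values for an arbitrary reward vector are just M @ R -- no re-learning needed
R = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print("SR-based values:", M @ R)
print("Analytic values:", np.linalg.solve(np.eye(n_states) - gamma * P, R))

The learned M approximates (I - gamma * P)^{-1}, so the two printed value vectors should roughly agree (up to noise from the fixed learning rate). The paper's contribution, to be covered in the journal club, is doing this when the state is not directly observed, using DDC representations of beliefs over latent states.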