The successor representation, its neural substrate, and behavioural consequences.
Breaking the mould of purely model-free (MF) or model-based (MB) reinforcement learning methods, the successor representation (SR) (Dayan, 1993) is a unique factorisation of the value function that bridges MB and MF approaches. At the start of the talk, Puria Radmard will discuss the mathematical formalism behind the SR and provide a live demo of how such a representation is learned iteratively. In the second part, Daniel Kornai will present two papers. In "The hippocampus as a predictive map" (Stachenfeld et al., 2017, Nature Neuroscience), the authors show how many properties of place fields and grid fields can be recapitulated by a model in which place cells encode the SR and grid cells encode a low-dimensional representation of the SR. In "The successor representation in human reinforcement learning" (Momennejad et al., 2017, Nature Human Behaviour), the authors show that human performance on continual reinforcement learning tasks is most consistent with a hybrid SR model.
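As a flavour of the kind of iterative SR learning the demo covers, the sketch below uses a temporal-difference rule to learn the SR matrix M on a tiny deterministic ring of states, then recovers a value function as V = M r. The environment, step count, and learning rate are all illustrative assumptions, not details from the talk.

```python
import numpy as np

# TD learning of the successor representation (SR) on a 5-state
# deterministic ring; all parameters here are illustrative.
n_states, gamma, alpha = 5, 0.9, 0.1

# M[s, s'] estimates the expected discounted future occupancy of s'
# when starting from s.
M = np.zeros((n_states, n_states))

s = 0
for _ in range(20000):
    s_next = (s + 1) % n_states  # deterministic clockwise walk
    onehot = np.eye(n_states)[s]
    # TD update: M(s, .) += alpha * [1_s + gamma * M(s', .) - M(s, .)]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# The SR factorises the value function: V = M r for any reward vector r,
# so rewards can be remapped without relearning the transition structure.
r = np.zeros(n_states)
r[3] = 1.0
V = M @ r
```

This separation is what makes the SR a middle ground between MF and MB methods: the learned M caches long-run transition statistics (MB-like flexibility to reward change), while computing V from it is a single matrix-vector product (MF-like cheapness).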
This talk is part of the Computational Neuroscience series.