
Mapping affective decisions in depression using reinforcement learning tools


If you have a question about this talk, please contact Dr Máté Lengyel.

Most decision problems the brain faces are fantastically hard. I will describe three general solution strategies that are well characterised at the neural, behavioural and computational levels, and show how each can elucidate important aspects of decision making in depression.

First, concentrating on habitual learning, we apply a simple TD-like learning model to a large dataset of 392 subjects in a simple asymmetrically rewarded decision-making task (due to Diego Pizzagalli). This allows a detailed analysis relating the effects of dopamine, stress and depression to the processing of rewards.

Second, we present a Bayesian model of learned helplessness. We show that the prior belief about the extent to which the environment is controllable has a profound impact on goal-directed choice behaviour, and that this relates specifically to psychometric measures of hopelessness in subjects suffering from recurrent depression.

Finally, we discuss the role of serotonin in behavioural inhibition. Computationally separating the effects of behavioural inhibition during learning from those expressed later during behaviour yields insight into the Prozac paradox: 5HTTLPR polymorphisms and SSRIs appear to act in the same way, yet have opposite effects on depression. Furthermore, considering the relationship between serotonin and innate, evolutionarily acquired Pavlovian responses hints at a possible reason for the comorbidity between mood and anxiety disorders.
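To make the first strand concrete, here is a minimal sketch of a TD-like (delta-rule) learner on a two-choice task with asymmetric reward probabilities, loosely in the spirit of Pizzagalli's probabilistic reward task. All names, parameters and values here (`rho` as a reward-sensitivity scaling, the 0.6/0.2 reward probabilities, the softmax temperature) are illustrative assumptions, not the model actually fitted in the work described.

```python
import math
import random

def run_td_learner(n_trials=1000, alpha=0.1, rho=1.0, beta=3.0, seed=0):
    """Simulate a simple TD-like learner on a two-option task with
    asymmetric reinforcement. Illustrative sketch only: parameters and
    task structure are assumptions, not the talk's actual model.

    alpha: learning rate; rho: reward sensitivity (scales experienced
    reward); beta: softmax inverse temperature."""
    rng = random.Random(seed)
    reward_prob = {"rich": 0.6, "lean": 0.2}  # asymmetric reward schedule
    Q = {"rich": 0.0, "lean": 0.0}            # learned action values
    choices = []
    for _ in range(n_trials):
        # softmax choice between the two options
        p_rich = 1.0 / (1.0 + math.exp(-beta * (Q["rich"] - Q["lean"])))
        action = "rich" if rng.random() < p_rich else "lean"
        # experienced reward, scaled by reward sensitivity rho
        reward = rho if rng.random() < reward_prob[action] else 0.0
        # delta-rule / TD update: Q <- Q + alpha * (r - Q)
        Q[action] += alpha * (reward - Q[action])
        choices.append(action)
    # response bias: fraction of choices toward the richly rewarded option
    bias = choices.count("rich") / n_trials
    return Q, bias
```

Reducing `rho` shrinks the prediction errors after rewards and hence the learned bias toward the richly rewarded option, which is the kind of blunted reward learning one would relate to anhedonia, stress and dopaminergic manipulations in such an analysis.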

This talk is part of the Computational Neuroscience series.



