University of Cambridge | Computational and Biological Learning Seminar Series

Optimal integration of top-down and bottom-up uncertainty in humans, monkeys, and neural networks


If you have a question about this talk, please contact Zoubin Ghahramani.

Hearing the sound of rustling leaves in the forest might indicate the presence of a predator, or it might just be the wind. Determining the source of the sound is difficult for two reasons: both sources can produce similar sounds, and the sound is corrupted by sensory noise. In this common decision paradigm, uncertainty associated with prior knowledge (top-down uncertainty) must be combined with uncertainty associated with sensory information from the environment (bottom-up uncertainty). The brain's ability to make decisions under both types of uncertainty is key to survival.

We studied this kind of decision-making in humans and macaque monkeys using an orientation classification task. Stimuli were oriented patterns whose orientations were drawn from one of two normal distributions, one narrow and one wide, sharing the same mean; each distribution defined a class. Top-down uncertainty stemmed from the overlap of the two distributions; bottom-up uncertainty was manipulated through the contrast of the stimulus. The Bayes-optimal observer in this task chooses, on each trial, the class with the higher posterior probability. Computing the posterior is nontrivial because it requires marginalizing over stimulus orientation, and the optimal strategy amounts to a decision criterion that varies with the trial-by-trial bottom-up uncertainty. Bayesian model comparison revealed that this model describes both human and monkey data better than models with non-optimal criteria.

We then constructed a neural network that behaves like the optimal observer under biologically plausible forms of neural variability. Within the framework of Poisson-like population codes, we trained networks with different types of operations to approximate the optimal posterior over class. A network with divisive normalization operations was sufficient to perform well in this non-trivial task. Our results demonstrate that humans, monkeys, and neural networks can optimally integrate top-down and bottom-up uncertainty.
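The Bayes-optimal observer for a two-class version of this task can be sketched in a few lines. Assuming a Gaussian measurement of orientation, marginalizing the true orientation out of the likelihood leaves a Gaussian class likelihood whose variance is the class variance plus the sensory-noise variance, so both the posterior and the uncertainty-dependent criterion have closed forms. The class standard deviations (3 and 12 degrees), equal class priors, and zero-mean classes are illustrative assumptions, not the talk's actual parameters.

```python
import numpy as np

def gauss_pdf(x, var):
    """Zero-mean Gaussian density with the given variance."""
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def posterior_narrow(x, sigma_noise, sigma1=3.0, sigma2=12.0):
    """Posterior probability of the narrow class given measurement x.

    Marginalizing orientation s out of p(x|C) = integral p(x|s) p(s|C) ds
    yields p(x|C) = N(x; 0, sigma_C^2 + sigma_noise^2); with equal priors,
    the posterior is the normalized likelihood ratio.
    """
    like_narrow = gauss_pdf(x, sigma1**2 + sigma_noise**2)
    like_wide = gauss_pdf(x, sigma2**2 + sigma_noise**2)
    return like_narrow / (like_narrow + like_wide)

def optimal_criterion(sigma_noise, sigma1=3.0, sigma2=12.0):
    """|x| below which the narrow class has the higher posterior.

    Solving N(x; 0, v1) = N(x; 0, v2) for x gives a criterion that grows
    with sensory noise: more bottom-up uncertainty widens the region in
    which the narrow class is chosen.
    """
    v1 = sigma1**2 + sigma_noise**2
    v2 = sigma2**2 + sigma_noise**2
    return np.sqrt(2.0 * v1 * v2 * np.log(np.sqrt(v2 / v1)) / (v2 - v1))
```

The key qualitative prediction tested in the talk is visible here: because `optimal_criterion` depends on `sigma_noise`, an optimal observer must shift its decision boundary from trial to trial as stimulus contrast (and hence sensory noise) changes.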

This talk is part of the Computational and Biological Learning Seminar Series.


