Talks.cam, University of Cambridge
Computational and Biological Learning Seminar Series
Optimal integration of top-down and bottom-up uncertainty in humans, monkeys, and neural networks
If you have a question about this talk, please contact Zoubin Ghahramani.

Hearing the sound of rustling leaves in the forest might indicate the presence of a predator, or it might just be the effect of the wind. Determining the source of the sound is difficult for two reasons: both sources can cause similar sounds, and the sound is corrupted by sensory noise. In this common decision paradigm, uncertainty associated with prior knowledge (top-down uncertainty) must be combined with uncertainty associated with sensory information from the environment (bottom-up uncertainty). The brain's ability to make decisions under both types of uncertainty is key for survival.

We studied this kind of decision-making in both humans and macaque monkeys using an orientation classification task. Stimuli were oriented patterns whose orientations were drawn from one of two normal distributions with the same mean but different widths, one narrow and one wide. Each distribution defined a class. Top-down uncertainty stemmed from the overlap of the two distributions; bottom-up uncertainty was manipulated through the contrast of the stimulus. The Bayes-optimal observer in this task chooses, on each trial, the class with the higher posterior probability. Computing this posterior is nontrivial because it requires marginalization over stimulus orientation. The optimal strategy amounts to using a decision criterion that varies with the trial-by-trial bottom-up uncertainty. Bayesian model comparison revealed that this model describes human and monkey data better than models with non-optimal criteria.

We then constructed a neural network that behaves like the optimal observer under biologically plausible forms of neural variability. Within the framework of Poisson-like population codes, we trained neural networks with different types of operations to approximate the optimal posterior over class. A network with divisive normalization operations was sufficient to perform well in this non-trivial task.
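The uncertainty-dependent criterion described above can be sketched in a few lines. This is a minimal illustration, not the speaker's actual model or parameter values: it assumes Gaussian sensory noise whose standard deviation grows as contrast falls, so that after marginalizing over the true orientation, the measurement under each class is Gaussian with the class variance plus the sensory variance, and the optimal rule reduces to a single criterion on the distance of the measurement from the common mean.

```python
import numpy as np

def optimal_criterion(sigma_narrow, sigma_wide, sigma_sensory):
    """Orientation offset |x - mu| at which the two class likelihoods
    are equal; the optimal observer chooses the narrow class whenever
    the measured offset is smaller than this criterion.

    Marginalizing over the true orientation s, the measurement x under
    class C is Gaussian with variance sigma_C^2 + sigma_sensory^2, so
    the log-likelihood ratio is quadratic in (x - mu) and the decision
    boundary is a single criterion on |x - mu|.
    """
    v1 = sigma_narrow**2 + sigma_sensory**2  # total variance, narrow class
    v2 = sigma_wide**2 + sigma_sensory**2    # total variance, wide class
    # Solve N(x; mu, v1) = N(x; mu, v2) for (x - mu)^2.
    k_sq = np.log(v2 / v1) * v1 * v2 / (v2 - v1)
    return np.sqrt(k_sq)

# Illustrative values (in degrees): lower contrast means larger sensory
# noise, which widens the optimal criterion on a trial-by-trial basis.
k_high_contrast = optimal_criterion(3.0, 12.0, sigma_sensory=1.0)
k_low_contrast = optimal_criterion(3.0, 12.0, sigma_sensory=5.0)
```

With these illustrative numbers the criterion widens from roughly 5.4 to 8.3 degrees as sensory noise grows, which is the sense in which the optimal observer must track bottom-up uncertainty on every trial rather than use a fixed boundary.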
Our results demonstrate that humans, monkeys, and neural networks can optimally integrate top-down and bottom-up uncertainty.

This talk is part of the Computational and Biological Learning Seminar Series.