Decoding of complex stimuli from large retinal populations

If you have a question about this talk, please contact Guillaume Hennequin.

Decoding of complex stimuli from retinal activity remains an open challenge. To date, experiments have focused on decoding either a small number of discrete stimuli (e.g. discriminating among several possible orientations of a drifting grating) or very low-dimensional dynamical traces (e.g. luminance in a full-field flicker experiment). In collaboration with experimentalists (the group of O. Marre at the Vision Institute, Paris), we have implemented different frameworks for decoding rich dynamical stimuli at a high level of spatial detail from the population activity of a rat's retina. Linear decoding frameworks achieve good decoding performance. Furthermore, we show that, in some instances, methods that exploit nonlinear features of the population activity decode significantly better than linear ones. Our work on decoding therefore complements encoding studies, providing novel and practical insights into the organization of the neural code.
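As a rough illustration of the kind of linear decoding framework the abstract refers to, the sketch below fits a ridge-regularized linear map from binned spike counts to stimulus pixels and scores it on held-out data. All names, dimensions, and the synthetic data are hypothetical stand-ins for recorded retinal activity; this is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time bins, N retinal ganglion cells, P stimulus pixels.
T, N, P = 2000, 60, 16

# Synthetic data standing in for recorded spike counts and stimulus frames;
# the "stimulus" is generated linearly from the counts plus noise, so a
# linear decoder should recover it well.
W_true = rng.normal(size=(N, P))
R = rng.poisson(2.0, size=(T, N)).astype(float)   # spike counts per bin
S = R @ W_true + 0.5 * rng.normal(size=(T, P))    # stimulus frames

# Train/test split.
R_tr, R_te = R[:1500], R[1500:]
S_tr, S_te = S[:1500], S[1500:]

# Ridge-regularized linear decoder: W = (R'R + lam*I)^{-1} R'S.
lam = 1.0
W = np.linalg.solve(R_tr.T @ R_tr + lam * np.eye(N), R_tr.T @ S_tr)

# Decode held-out stimuli and measure the fraction of variance explained.
S_hat = R_te @ W
r2 = 1.0 - np.sum((S_te - S_hat) ** 2) / np.sum((S_te - S_te.mean(0)) ** 2)
```

In a real experiment, nonlinear decoders (e.g. ones using spike-count interactions or neural networks) would be compared against this linear baseline on the same held-out data.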

This talk is part of the Computational Neuroscience series.


© 2006-2019 Talks.cam, University of Cambridge.