
Neuronal processing of continuous sensory streams


If you have a question about this talk, please contact Guillaume Hennequin.

During behavior, a continuous stream of sensory information reaches the central nervous system in the form of high-dimensional spatio-temporal patterns of action potentials. When processing such activity, many sensory neurons respond with high selectivity and precise tuning to behaviorally meaningful sensory cues, such as sounds within a communication call or shapes within an evolving visual scene. Typically, the temporal extent of such embedded features can be orders of magnitude shorter than the duration of the encompassing behavioral episodes. It is unclear how neurons can bridge these time scales when learning to signal the presence and quality of individual perceptual features in an evolving sensory stream to downstream processing stages. It is commonly hypothesized that such learning must rely on temporal segmentation of the sensory streams, or on other supervisory signals that provide neurons with information about the timing and values of their target features.

In contrast, we show here that an aggregate scalar teaching signal delivered at the end of a long sensory episode is sufficient for biologically plausible neuron models to acquire even complex tuning functions for spatio-temporal patterns of spikes that arrive embedded in continuous streams of spiking background activity. The proposed learning implements a novel form of spike-based synaptic plasticity that reduces the difference between a neuron's output spike count and the value of the teaching signal.

Based on the simplicity of such supervisory signaling, we propose a novel type of self-organizing spiking neuronal network as a model for the emergence of feature selectivity and map formation in sensory pathways. In these two-layer networks, self-organization is driven by a positive feedback loop between a processing layer and a supervisor layer that computes neuronal teaching signals by taking weighted averages over the processing-layer activity.
We demonstrate the power of this learning paradigm by implementing a neuronal model of continuous speech processing.
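The core idea of the abstract — nudging synaptic weights so that a neuron's episode-wide output spike count approaches a scalar teaching signal delivered only at the end of the episode — can be illustrated with a minimal simulation. The sketch below is not the speakers' actual model; it assumes a simple leaky integrate-and-fire neuron and a crude eligibility trace (each synapse's total presynaptic activity over the episode), and all parameter names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, T, DT = 50, 500, 1.0   # number of inputs, steps per episode, ms per step
TAU, V_TH = 20.0, 1.0        # membrane time constant (ms), spike threshold
ETA = 0.01                   # learning rate (hypothetical value)

def run_episode(w, spikes):
    """Simulate a leaky integrate-and-fire neuron over one episode;
    return its total output spike count (the aggregate quantity being trained)."""
    v, count = 0.0, 0
    for t in range(T):
        v += DT / TAU * (-v) + w @ spikes[t]   # Euler step: leak plus synaptic drive
        if v >= V_TH:
            count += 1
            v = 0.0                            # reset after an output spike
    return count

w = rng.normal(0.0, 0.05, N_IN)
target = 5   # aggregate scalar teaching signal: desired spike count per episode

for episode in range(200):
    # Poisson-like continuous input stream (no segmentation, no timing labels)
    spikes = (rng.random((T, N_IN)) < 0.02).astype(float)
    count = run_episode(w, spikes)
    # Crude eligibility: each synapse's summed presynaptic activity this episode.
    # The update shrinks the difference between output count and teaching signal.
    eligibility = spikes.sum(axis=0)
    w += ETA * (target - count) * eligibility / T
```

Note that the update uses only the end-of-episode scalar error `target - count`; no information about *when* within the episode the neuron should have fired is provided, which is the point of the aggregate-label setting described above.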

This talk is part of the Computational Neuroscience series.


© 2006-2019 Talks.cam, University of Cambridge.