Neural entrainment to higher-level features of speech reflects the discrete nature of perception in the auditory system

If you have a question about this talk, please contact Matt.Davis.

Recent research suggests that the visual system does not monitor the environment continuously, but rather samples it, cycling between ‘snapshots’ at discrete moments in time. Interestingly, most attempts to discover analogous mechanisms in the auditory system have failed, indicating crucial differences between the visual and auditory systems. In this talk, I will hypothesize why this might be the case and present data that keep alive the idea of discrete stimulus processing in the auditory system. First, I show that auditory recognition is clearly more robust to subsampling applied at a higher level of auditory representation than to the raw input to the system. Second, I present evidence that neural oscillations, the brain's putative subsampling tool, do not merely follow rhythmic changes in the spectral content of speech passively; rather, they actively adjust to higher-level features of speech, and this adjustment has perceptual consequences. These findings suggest that, if discrete sampling exists in the auditory system, it should operate at a higher level of auditory processing, and potentially in a flexible fashion.

This talk is part of the Cambridge Language Sciences series.

