
Neural foundations of spoken language: word learning and ambiguity resolution


If you have a question about this talk, please contact Napoleon Katsos.

Understanding spoken language requires a complex series of processing stages to perceive speech sounds, recognise familiar words and combine word meanings for comprehension. I will begin by sketching out an account of the multiple, hierarchically-organised processing streams in the temporal lobe, and convergent connectivity in prefrontal/premotor regions that are involved in understanding speech. From this anatomical starting point, I will present two case studies of how investigations of these neural foundations can contribute to a cognitive and computational understanding of spoken language processing.

The first half of the talk will explore the processes involved in learning new words. Existing findings suggest a two-stage process in which phonological representations of novel words are rapidly acquired but require offline consolidation before they achieve lexical status equivalent to that of pre-existing familiar words. In behavioural experiments, only novel words learnt the day before testing engage in lexical competition and show faster repetition latencies than untrained nonwords. In fMRI, neural responses to novel words show evidence of overnight consolidation: responses in superior temporal, premotor and cerebellar regions are unmodified by training on the same day as scanning, whereas novel words learnt the previous day show more word-like neural responses. In contrast, elevated responses to novel words in the left hippocampus show rapid habituation and predict the initial acquisition of unfamiliar novel words. These findings suggest an account of word learning in which there is a division of labour between hippocampal systems involved in the initial acquisition of novel spoken words and cortical systems involved in their overnight consolidation. Biological precedents for this two-stage learning process will be discussed.

The second half of the talk will report evidence from functional imaging investigations of semantic ambiguity resolution. I will first present evidence for a coordinated response of inferior frontal and inferior temporal lobe regions in computing the correct meaning of sentences such as “The shell was fired towards the tank”. This fronto-temporal response to ambiguity provides a neural marker of intact speech comprehension that can be applied to clinical and pharmacological states in which behavioural responses to speech are absent. Dissociations of frontal and temporal lobe responses to speech during sedation with anaesthetic drugs suggest a critical role for frontal regions in supporting the perception and comprehension of speech. In severely brain-injured patients (e.g. vegetative and minimally conscious patients), functional imaging can detect residual speech comprehension that would not be detected through behavioural responses. Such findings raise important questions concerning the relationship between speech comprehension and conscious awareness.

This talk is part of the RCEAL Tuesday Colloquia series.



© 2006-2023, University of Cambridge.