
From ears to brain (and back): Imaging the brain computations for sound analysis.


If you have a question about this talk, please contact Louise White.

A friend speaking, a bird chirping, a piano playing. Any living being or non-living object that vibrates generates acoustic waveforms, which we call sounds. How does our brain transform these acoustic vibrations into meaningful percepts? This lecture presents current research, based on functional magnetic resonance imaging (fMRI), aimed at discovering the computations the brain performs to achieve this remarkable feat.

In the first part, I will present research combining high-resolution fMRI with computational modelling to reveal how natural sounds are encoded in the auditory cortex, the part of the brain most relevant for the processing of sound. The results show that, in humans as well as in macaque monkeys, the cortical encoding of natural sounds entails the simultaneous formation of multiple representations with different degrees of spectral and temporal detail. This multi-resolution analysis may be crucial for enabling flexible, context-dependent processing of sounds in the highly dynamic everyday environment. Analyses of cross-species differences between humans and macaque monkeys suggest that, in the human cortex alone, even "general purpose" cortical mechanisms of sound analysis are shaped by the characteristic acoustic properties of speech.

In the second part, I will show how the high spatial resolution (< 1 mm) and specificity achievable with new fMRI techniques at ultra-high magnetic fields (7 and 9.4 Tesla) open up the possibility of examining "unknown" territories in humans, such as the columnar and laminar architecture of (primary) auditory cortical areas. Finally, I will elaborate on the potential and challenges of combining computational modelling and laminar fMRI to study the relevant neuro-computational mechanisms in the human auditory cortex.
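To make the multi-resolution idea concrete, here is a minimal, hypothetical Python sketch (not the speaker's actual analysis pipeline): the window length of a short-time Fourier transform trades temporal against spectral resolution, so analysing the same sound with several window lengths yields simultaneous representations with different degrees of spectral and temporal detail. The sampling rate, toy amplitude-modulated tone, and window lengths below are illustrative assumptions.

```python
# Illustrative sketch of multi-resolution time-frequency analysis.
# Short STFT windows give fine temporal / coarse spectral detail;
# long windows give the reverse.
import numpy as np
from scipy.signal import stft

fs = 16000  # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Toy stimulus: a 440 Hz tone with slow (4 Hz) amplitude modulation.
sound = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

# Analyse the same sound at three resolutions (8, 32, 128 ms windows at 16 kHz).
for win_len in (128, 512, 2048):
    freqs, times, Z = stft(sound, fs=fs, nperseg=win_len, noverlap=win_len // 2)
    spectrogram = np.abs(Z)  # magnitude spectrogram at this resolution
    print(f"window {win_len:4d} samples: "
          f"{len(freqs)} frequency bins x {len(times)} time frames")
```

Each pass produces a different trade-off: the shortest window tracks the 4 Hz amplitude modulation precisely but blurs frequency, while the longest window resolves the 440 Hz carrier sharply but smears its temporal envelope.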

Elia Formisano received his PhD in Bioengineering in 2000 from the Italian national programme. In 1998-1999, he was a research fellow at the Max Planck Institute for Brain Research in Frankfurt am Main (with Dr. Rainer Goebel and Prof. Wolf Singer). In January 2000, he was appointed Assistant Professor at Maastricht University (Faculty of Psychology and Neuroscience), where he is now Full Professor of Neuroimaging Methods in the Department of Cognitive Neuroscience. He is Scientific Director of the Maastricht Brain Imaging Centre (MBIC) and a core member of the Maastricht Centre for Systems Biology (MaCSBio). In 2000-2002, he was a visiting researcher at the Center for Magnetic Resonance Research, University of Minnesota, USA (Prof. Kamil Ugurbil), where he pioneered the use of ultra-high magnetic field (7 Tesla) fMRI for studying the functional organization of the human auditory cortex. Together with his research group, he studies the neural representation and processing of (natural) sounds and auditory scenes in the human auditory cortex by combining multimodal functional neuroimaging with machine learning and computational modelling.

This talk is part of the Zangwill Club series.
