How do brains encode facial expression movements?


If you have a question about this talk, please contact Marwa Mahmoud.

Nick Furl is a neuroscientist at the MRC Cognition and Brain Sciences Unit, where he uses human brain imaging to investigate face perception. Recognising facial expressions from dynamic stimuli presents a difficult computational problem, and little is known about how the brain solves it; prevailing theory rests almost entirely on studies using static photographs. Nick’s approach is to use video stimuli to examine how brain areas respond to facial movement information and to test how their responses relate to representations of facial expression categories. He will discuss previous neuroscience research on movement perception, including his recent study showing that facial expression categories can be decoded from movement-sensitive areas in the monkey brain. This finding raises the hypothesis that these movement-sensitive areas recognise expression categories from motion cues. The next step is to quantify specific motion cues from video, a task on which computer vision has already made much progress. These quantifications can then be related to imaging data to discover how the brain assembles motion information into representations of facial expression categories.
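To give a flavour of what "quantifying motion cues from video" can mean at its simplest, the toy sketch below computes frame-to-frame motion energy (mean absolute intensity change between consecutive grayscale frames). This is purely illustrative and not the speaker's method; real computer-vision work on facial movement would use far richer descriptors such as dense optical flow, and all names and values here are invented for the example.

```python
# Toy motion cue: frame-to-frame "motion energy", i.e. the mean absolute
# intensity change between two consecutive grayscale frames.
# Illustrative only -- not the actual analysis described in the talk.

def motion_energy(prev_frame, next_frame):
    """Mean absolute pixel difference between two grayscale frames,
    each given as a list of rows of intensity values (0-255)."""
    total = 0
    count = 0
    for prev_row, next_row in zip(prev_frame, next_frame):
        for p, n in zip(prev_row, next_row):
            total += abs(n - p)
            count += 1
    return total / count

# Two tiny synthetic 2x3 "frames": a bright patch shifts one pixel right,
# as a stand-in for a small facial movement between video frames.
frame_a = [[10, 200, 10],
           [10, 200, 10]]
frame_b = [[10, 10, 200],
           [10, 10, 200]]

print(motion_energy(frame_a, frame_a))  # identical frames -> 0, no motion
print(motion_energy(frame_a, frame_b))  # large value -> motion occurred
```

A time series of such cues, computed over an expression video, is the kind of stimulus description that can then be regressed against brain-imaging responses.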

This talk is part of the Rainbow Group Seminars series.

