
Synthesizing Expressions using Facial Feature Point Tracking: How Emotion is Conveyed


If you have a question about this talk, please contact Lech Swirski.

Many approaches to the analysis and synthesis of facial expressions rely on automatically tracked landmark points on human faces. However, these points are usually chosen for ease of tracking rather than for their ability to convey affect. We conducted an experiment evaluating the perceptual importance of 22 such automatically tracked feature points in a mental state recognition task. The experiment compared the mental state recognition rates of participants who viewed videos of human actors and of synthetic characters (a physical android robot, a virtual avatar, and virtual stick-figure drawings) enacting various facial expressions.

In this talk I will present the results of our experiment and their implications for facial feature analysis and synthesis.

This talk is part of the Rainbow Interaction Seminars series.


© 2006-2019 Talks.cam, University of Cambridge.