BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Rainbow Interaction Seminars
SUMMARY:Synthesizing Expressions using Facial Feature Poin
 t Tracking: How Emotion is Conveyed - Tadas Baltru
 saitis (University of Cambridge)
DTSTART;TZID=Europe/London:20101014T141500
DTEND;TZID=Europe/London:20101014T151500
UID:TALK27239@talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/27239
DESCRIPTION:Many approaches to the analysis and synthesis of f
 acial expressions rely on automatically tracking l
 andmark points on human faces. However\, this appr
 oach is usually chosen because of ease of tracking
  rather than its ability to convey affect. We have
  conducted an experiment that evaluated the percep
 tual importance of 22 such automatically tracked f
 eature points in a mental state recognition task. 
 The experiment compared mental state recognition r
 ates of participants who viewed videos of human ac
 tors and synthetic characters (physical android ro
 bot\, virtual avatar\, and virtual stick figure dr
 awings) enacting various facial expressions.\n\nIn
  this talk I will present the results of our exper
 iment and the implications they have for facial fe
 ature analysis and synthesis.
LOCATION:Computer Laboratory\, William Gates Building\, Roo
 m SS03
CONTACT:Lech Swirski
END:VEVENT
END:VCALENDAR
