
Can we hear people think, can we discern genuine laughter … and can a computer do it better?


If you have a question about this talk, please contact Daniel Bernhardt.

In this talk I will present an automatic system for inferring expressions of emotions and mental states from non-verbal speech, which outperforms human accuracy. The inference is based on a new set of vocal features, including music-derived features that have not previously been used in this context. I will describe how an arbitrary set of expressions can be used, via the inference machine, to automatically map several hundred lexical definitions of mental states, and to find the vocal correlates of a lexically oriented topology of mental states. I will also show how findings from one language (English) can be used to automatically infer expressions in another (Hebrew), and demonstrate expressions in human-computer interaction (HCI) involving decision-making, the complex behaviour of expressions over time, and verification through multi-modal and context analysis.
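The talk does not specify which vocal features the system uses, so as an illustrative sketch only, the snippet below computes three classic prosodic features that speech-emotion systems of this kind typically build on: short-time energy, zero-crossing rate, and an autocorrelation-based pitch (F0) estimate. The function name, the F0 search range, and the synthetic test frame are all assumptions for illustration, not the speaker's actual feature set.

```python
import numpy as np

def prosodic_features(signal, sample_rate):
    """Compute three simple prosodic features for one speech frame:
    short-time energy, zero-crossing rate, and an autocorrelation-based
    pitch (F0) estimate. Illustrative only; not the talk's feature set."""
    # Short-time energy: mean squared amplitude of the frame
    energy = float(np.mean(signal ** 2))
    # Zero-crossing rate: fraction of adjacent samples whose sign differs
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    # Pitch: pick the autocorrelation peak within a plausible speech
    # F0 range (assumed here to be 50-400 Hz)
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sample_rate / 400), int(sample_rate / 50)
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sample_rate / lag
    return {"energy": energy, "zcr": zcr, "f0": f0}

# Synthetic 200 Hz tone standing in for one 40 ms voiced speech frame
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
frame = np.sin(2 * np.pi * 200 * t)
feats = prosodic_features(frame, sr)
```

For a pure 200 Hz tone the autocorrelation peaks at a lag of one period (80 samples at 16 kHz), so the F0 estimate recovers 200 Hz; real speech frames would be windowed and the features aggregated over time before being fed to a classifier.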

This talk is part of the Rainbow Interaction Seminars series.


© 2006-2021 Talks.cam, University of Cambridge.