Multimodal Affective Inference for Driver Monitoring

If you have a question about this talk, please contact Christian Richardt.

We present a novel driver-vehicle interaction system that combines speech recognition with facial-expression recognition to improve intention-recognition accuracy in the presence of engine and road noise. The system would allow drivers to interact with in-car devices such as satellite navigation and other telematic or control systems. We describe a pilot study and an experiment in which we tested the system, and show that multimodal fusion of speech and facial-expression recognition yields higher accuracy than either modality alone.
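The abstract does not specify the fusion method used. As a minimal sketch only, the following Python snippet shows one plausible approach, decision-level (late) fusion of per-modality posterior probabilities; the intent labels, weights, and probability values are all invented for illustration and are not from the talk.

```python
import numpy as np

# Candidate driver intentions (illustrative labels, not from the talk).
INTENTS = ["navigate", "adjust_climate", "play_music", "make_call"]

def fuse_late(p_speech: np.ndarray, p_face: np.ndarray,
              w_speech: float = 0.6) -> np.ndarray:
    """Weighted sum of per-modality posteriors over the same intent set.

    Lowering w_speech models reduced reliability of the speech
    recogniser under engine and road noise.
    """
    fused = w_speech * p_speech + (1.0 - w_speech) * p_face
    return fused / fused.sum()  # renormalise to a distribution

# Example: noisy audio leaves the speech recogniser ambiguous between two
# intents, while the facial-expression cue favours one; fusion resolves it.
p_speech = np.array([0.40, 0.38, 0.12, 0.10])  # ambiguous under noise
p_face   = np.array([0.10, 0.65, 0.15, 0.10])  # expression disambiguates

fused = fuse_late(p_speech, p_face, w_speech=0.5)
print(INTENTS[int(np.argmax(fused))])  # -> "adjust_climate"
```

The point of the sketch is the qualitative claim in the abstract: when one modality is degraded by noise, combining it with a second, independent modality can recover a decision that neither channel supports confidently on its own.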

This talk is part of the Rainbow Interaction Seminars series.
