Multimodal Affective Inference for Driver Monitoring
If you have a question about this talk, please contact Christian Richardt.

We present a novel system for driver-vehicle interaction that combines speech recognition with facial-expression recognition to increase intention-recognition accuracy in the presence of engine and road noise. Our system would allow drivers to interact with in-car devices such as satellite navigation and other telematic or control systems. We describe a pilot study and an experiment in which we tested the system, and show that multimodal fusion of speech and facial-expression recognition provides higher accuracy than either modality alone.

This talk is part of the Rainbow Interaction Seminars series.
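The abstract does not say how the two recognisers are combined, so the sketch below illustrates just one common late-fusion approach: each modality outputs per-intent posterior probabilities, which are merged with a weighted product rule so that a noisy speech channel can be down-weighted in favour of the face. The intent labels, the `fuse_posteriors` helper, and the `speech_weight` parameter are all hypothetical, not taken from the talk.

```python
import numpy as np

# Hypothetical intent vocabulary for in-car commands (illustrative only).
INTENTS = ["navigate_home", "zoom_map", "call_contact", "adjust_volume"]

def fuse_posteriors(speech_probs, face_probs, speech_weight=0.7):
    """Weighted log-linear (product-rule) fusion of two modality posteriors.

    speech_probs, face_probs: per-intent probabilities, each summing to 1.
    speech_weight: trust placed in the speech channel; lowering it models
    degraded audio (e.g. engine and road noise), shifting reliance to the face.
    """
    w = speech_weight
    log_fused = w * np.log(speech_probs) + (1.0 - w) * np.log(face_probs)
    fused = np.exp(log_fused - log_fused.max())  # stabilise before normalising
    return fused / fused.sum()

# Example: speech is noisy and ambiguous, facial expression is more decisive.
speech = np.array([0.35, 0.30, 0.20, 0.15])  # engine noise flattens this
face = np.array([0.70, 0.10, 0.10, 0.10])    # expression clearly favours one intent

fused = fuse_posteriors(speech, face, speech_weight=0.5)
print(INTENTS[int(fused.argmax())], fused.round(3))
```

With equal weights, the decisive face channel dominates the flattened speech distribution, which mirrors the abstract's claim that fusing the two modalities outperforms either alone under noise.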