
Machines that can read lips


If you have a question about this talk, please contact Dr Jie Pu.

This talk will be held on Zoom.

Abstract: Decades of research in acoustic speech recognition have led to systems that we use in everyday life. However, even the most advanced speech recognition systems fail in the presence of noise. This degraded performance can be (partially) addressed by introducing visual speech information. In this talk, we will see how deep learning has made this possible, and we will also present our work on visual speech recognition (lip-reading).
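The idea of supplementing noisy audio with visual speech information can be illustrated with a minimal late-fusion sketch. This is not the speaker's actual model; all function names, logits, and weights below are hypothetical. The sketch combines per-class scores from an audio recognizer and a visual (lip-reading) recognizer, down-weighting the audio stream when it is judged unreliable:

```python
# Illustrative sketch (hypothetical, not the speaker's method): weighted
# late fusion of audio and visual recognizer scores. When audio is noisy,
# its weight is reduced so the visual (lip-reading) stream dominates.
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(audio_logits, visual_logits, audio_reliability):
    """Blend the two streams; audio_reliability is in [0, 1]."""
    a = audio_reliability
    fused = [a * x + (1 - a) * y
             for x, y in zip(audio_logits, visual_logits)]
    return softmax(fused)

# Clean audio: the acoustic model's prediction (class 0) wins.
clean = fuse([2.0, 0.1, 0.1], [0.5, 1.5, 0.2], audio_reliability=0.9)
# Noisy audio: the visual stream's prediction (class 1) wins instead.
noisy = fuse([2.0, 0.1, 0.1], [0.5, 1.5, 0.2], audio_reliability=0.1)
```

In practice, modern audio-visual systems learn the fusion inside a deep network rather than using a fixed reliability weight, but the principle is the same: the visual stream carries information that survives acoustic noise.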

Bio: Pingchuan Ma is a fourth-year Ph.D. student in the Intelligent Behaviour Understanding Group (IBUG) at Imperial College London, advised by Prof. Maja Pantic and Dr. Stavros Petridis. Before that, he received an MSc degree in Machine Learning from Imperial College London in 2017 and a BSc degree in Automation from Beihang University in 2015. He was a research intern at Facebook AI Applied Research (FAIAR) in 2021.

This talk is part of the CUED Speech Group Seminars series.


