
Deep learning for automatically assessing the pronunciation of non-native English speakers


If you have a question about this talk, please contact Andrew Caines.

In this talk, I will present my work on automatically characterizing the pronunciation of non-native English speakers based on spontaneous utterances. I will begin by defining what we mean by pronunciation and the challenges presented by spontaneous, as opposed to read-aloud, speech. I will explore two systems based on distances between phones: a two-stage system using K-L divergence between Gaussian models as an input to a DNN, and an end-to-end system using Siamese LSTMs with attention over the hidden representations. It will be seen how both systems can predict the human-assigned grade, as well as speaker L1 and country of origin, and how their representations can be interpreted. Finally, I will discuss my approach to the detection of individual pronunciation errors and how it relates to the auto-marking system.
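The following is a minimal, illustrative sketch of the kind of two-stage pipeline the abstract describes, not the speaker's actual system: a diagonal-covariance Gaussian is fitted per phone from a speaker's phone-aligned acoustic frames, symmetric K-L divergences between every pair of phone models form a fixed-length distance feature vector, and a small feed-forward network maps that vector to a grade. The phone inventory size, feature dimension, diagonal covariances, and network shape are all assumptions made for the example.

```python
# Illustrative sketch (assumed details, not the presented system): per-speaker
# Gaussian phone models, symmetric KL divergence between every phone pair,
# and a small feed-forward grader on the flattened distance features.
import numpy as np
import torch
import torch.nn as nn

N_PHONES = 47        # assumed phone inventory size
FEAT_DIM = 13        # assumed acoustic feature dimension (e.g. MFCCs)

def fit_diag_gaussian(frames):
    """Mean and diagonal variance of the frames aligned to one phone."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6          # variance floor for stability
    return mu, var

def kl_diag(mu0, var0, mu1, var1):
    """KL(N0 || N1) for diagonal-covariance Gaussians (closed form)."""
    return 0.5 * np.sum(var0 / var1 + (mu1 - mu0) ** 2 / var1
                        - 1.0 + np.log(var1 / var0))

def phone_distance_features(phone_frames):
    """Symmetric KL between every pair of phone models, flattened."""
    models = [fit_diag_gaussian(f) for f in phone_frames]
    d = np.zeros((N_PHONES, N_PHONES))
    for i, (mi, vi) in enumerate(models):
        for j, (mj, vj) in enumerate(models):
            if i != j:
                d[i, j] = 0.5 * (kl_diag(mi, vi, mj, vj) + kl_diag(mj, vj, mi, vi))
    # keep the upper triangle only, since the matrix is symmetric
    return torch.tensor(d[np.triu_indices(N_PHONES, k=1)], dtype=torch.float32)

# A small DNN mapping the phone-distance features to a pronunciation grade.
grader = nn.Sequential(
    nn.Linear(N_PHONES * (N_PHONES - 1) // 2, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Toy usage: random frames standing in for phone-aligned acoustic features.
phone_frames = [np.random.randn(50, FEAT_DIM) for _ in range(N_PHONES)]
grade = grader(phone_distance_features(phone_frames))
```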

This talk is part of the NLIP Seminar Series.


