
Don't multiply lightly: exploring how DNN depth interacts with HMM independence assumptions in hybrid HMM/DNNs used for ASR


If you have a question about this talk, please contact ar527.

While hybrid hidden Markov model/deep neural network (HMM/DNN) acoustic models have replaced HMM/GMMs in automatic speech recognition (ASR) owing to their performance improvements, the HMM's conditional independence assumptions remain unrealistic. In this work we explore the extent to which the depth of neural networks helps compensate for these poor conditional independence assumptions. Using a resampling framework that allows us to control the amount of data dependence in the test set, while still using real observations from the data, we can determine how robust neural networks, and particularly deeper models, are to data dependence. We conclude that if the data matched the conditional independence assumptions of the HMM, there would be little benefit from using deeper models; it is only when the data become more dependent that depth improves ASR performance. That performance nevertheless degrades substantially as the data become more realistic suggests that better temporal modeling is still needed for ASR. This is joint work with Suman Ravuri.
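The abstract does not specify how the resampling framework is implemented, but one minimal sketch of the idea, assuming frames are resampled within their aligned HMM state, is the following. The function name, arguments, and the interpolation parameter `p_resample` are all hypothetical illustrations, not the authors' actual method: at `p_resample=1.0` every frame is replaced by a random real frame from the same state, so observations become conditionally independent given the state sequence (matching the HMM assumption), while at `0.0` the original, fully dependent utterance is returned, and intermediate values control the amount of dependence.

```python
import numpy as np

def resample_frames(frames, states, p_resample, rng=None):
    """Hypothetical sketch: weaken temporal dependence in an utterance
    by replacing each frame, with probability p_resample, with a random
    real frame drawn from the pool of frames aligned to the same HMM state.

    frames: (T, D) array of acoustic feature vectors
    states: (T,) array of per-frame HMM state labels (from an alignment)
    """
    rng = np.random.default_rng() if rng is None else rng
    frames = np.asarray(frames, dtype=float)
    states = np.asarray(states)
    # Pool of frame indices available for each state label.
    pools = {s: np.flatnonzero(states == s) for s in np.unique(states)}
    out = frames.copy()
    for t in range(len(frames)):
        if rng.random() < p_resample:
            # Draw a real observation from the same state's pool,
            # breaking its dependence on neighbouring frames.
            out[t] = frames[rng.choice(pools[states[t]])]
    return out
```

Because only real observations are swapped in, the marginal distribution of frames given each state is preserved; only the temporal dependence between frames is varied, which is what lets the experiment isolate the effect of the HMM's independence assumption.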

This talk is part of the CUED Speech Group Seminars series.



© 2006-2019 Talks.cam, University of Cambridge.