
Some Applications of the Kullback-Leibler Divergence Rate in Hidden Markov Models


If you have a question about this talk, please contact Dr Guy-Bart Stan.

The Kullback-Leibler (K-L) divergence rate between stochastic processes is a generalization of the familiar K-L divergence between probability vectors. In this talk, the K-L divergence rate is introduced, and an easy derivation is given of the K-L divergence rate between two Markov chains over a common alphabet. This formula is used to solve the problem of approximating a Markov chain with “long” memory by another with “short” memory in an optimal fashion. The difficulties in extending this result to HMMs are explained. Finally, a geometrically convergent estimate for the K-L divergence rate between two HMMs is provided.
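For two irreducible stationary Markov chains on a common alphabet with transition matrices A and B, the K-L divergence rate reduces to the well-known closed form R(A‖B) = Σᵢ πᵢ Σⱼ Aᵢⱼ log(Aᵢⱼ/Bᵢⱼ), where π is the stationary distribution of A. The following sketch (not from the talk; function and variable names are illustrative) computes this quantity:

```python
import numpy as np

def kl_divergence_rate(A, B):
    """K-L divergence rate between two stationary Markov chains on a
    common alphabet, given row-stochastic transition matrices A and B:

        R(A || B) = sum_i pi_i * sum_j A_ij * log(A_ij / B_ij),

    where pi is the stationary distribution of A.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # Stationary distribution of A: the left eigenvector of A for
    # eigenvalue 1 (the largest eigenvalue of a stochastic matrix).
    eigvals, eigvecs = np.linalg.eig(A.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi = pi / pi.sum()
    # Accumulate pi_i * A_ij * log(A_ij / B_ij); by convention,
    # terms with A_ij == 0 contribute zero.
    mask = A > 0
    ratio = np.where(mask, A / np.where(mask, B, 1.0), 1.0)
    return float(np.sum(pi[:, None] * np.where(mask, A * np.log(ratio), 0.0)))

# Example: a 2-state chain versus an i.i.d. (memoryless) approximation.
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.5, 0.5], [0.5, 0.5]]
print(kl_divergence_rate(A, B))
```

The rate is zero when the two chains coincide and strictly positive otherwise, which is what makes it usable as the objective in the long-memory-to-short-memory approximation problem mentioned above. No comparably simple closed form is known for HMMs, which is why the talk turns to convergent estimates instead.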

This talk is part of the CUED Control Group Seminars series.


