Some Applications of the Kullback-Leibler Divergence Rate in Hidden Markov Models
If you have a question about this talk, please contact Dr Guy-Bart Stan.
The Kullback-Leibler (K-L) divergence rate between stochastic processes is a generalization of the familiar K-L divergence between probability vectors. In this talk, the K-L divergence rate is introduced, and an easy derivation is given of the K-L divergence rate between two Markov chains over a common alphabet. This formula is used to solve the problem of approximating a Markov chain with “long” memory by another with “short” memory in an optimal fashion. The difficulties in extending this result to HMMs are explained. Finally, a geometrically convergent estimate for the K-L divergence rate between two HMMs is provided.
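The formula for the K-L divergence rate between two Markov chains referred to above is the classical one: weight the per-state K-L divergence of the transition rows by the stationary distribution of the first chain. A minimal NumPy sketch of this computation follows; the function names and the example transition matrices are illustrative, not from the talk itself.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of transition matrix P, i.e. pi P = pi."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    return pi / pi.sum()

def kl_divergence_rate(P, Q):
    """K-L divergence rate between Markov chains with transition matrices P, Q:
    R(P||Q) = sum_i pi_i * sum_j P_ij * log(P_ij / Q_ij),
    where pi is the stationary distribution of P.
    Terms with P_ij = 0 contribute zero by convention."""
    pi = stationary_distribution(P)
    mask = P > 0
    terms = np.where(mask, P * np.log(np.where(mask, P, 1.0) / Q), 0.0)
    return float(np.sum(pi[:, None] * terms))

# Illustrative example: a "sticky" 2-state chain vs. an i.i.d. fair coin
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])
rate = kl_divergence_rate(P, Q)
```

Since `Q` here has independent rows, the rate reduces to the gap between the entropy rate of `P` and log 2; in general the divergence rate is zero exactly when the two chains induce the same process.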
This talk is part of the CUED Control Group Seminars series.