Beam Sampling for Infinite Hidden Markov Models
If you have a question about this talk, please contact Carl Edward Rasmussen.

The Infinite Hidden Markov Model (iHMM) [1,2] is an extension of the classical Hidden Markov Model, widely used in machine learning and bioinformatics. As a tool for modelling sequential data, Hidden Markov Models suffer from the need to specify the number of hidden states in advance. Although model selection and model averaging are widely used in this context, the Infinite Hidden Markov Model offers a nonparametric alternative. The core idea of the iHMM is to use Dirichlet Processes to define the distribution over the rows of the Markov model transition matrix. As a result, the number of states in use can be adapted automatically during learning, or integrated over for prediction.

Until now, the Gibbs sampler was the only known inference algorithm for the iHMM. This is unfortunate, as the Gibbs sampler is known to mix poorly when variables are strongly correlated, which is often the case in sequential or time-series data. Moreover, it is surprising that we have powerful inference algorithms for finite HMMs (the forward-backward and Baum-Welch dynamic programming algorithms) yet cannot apply these methods to the iHMM.

In this work, we propose a method called the "Beam Sampler", which combines ideas from slice sampling and dynamic programming for inference in the iHMM. We show that the beam sampler has some interesting properties: (1) it is less susceptible to strong correlations in the data than the Gibbs sampler, and (2) it handles non-conjugacy in the model more easily than the Gibbs sampler. We also show that the scope of the beam sampler idea extends beyond the Infinite Hidden Markov Model: it can also be used to train finite HMMs efficiently.
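To make the combination of slice sampling and dynamic programming concrete, here is a minimal sketch of one beam-sampling sweep over the hidden states. It is illustrative only, not the speakers' implementation: it assumes a fixed (truncated) transition matrix `pi`, precomputed emission likelihoods `lik`, and a uniform initial-state distribution, and it omits the stick-breaking step by which the full iHMM instantiates new states on demand. All names (`beam_sample_states`, `pi`, `lik`) are hypothetical.

```python
import numpy as np

def beam_sample_states(pi, lik, states, rng):
    """One beam-sampling sweep over a hidden state sequence.

    pi     : (K, K) row-stochastic transition matrix (in the iHMM this
             would be a finite truncation, extended via stick-breaking).
    lik    : (T, K) emission likelihoods p(y_t | s_t = k).
    states : (T,) current state sequence, used to draw slice variables.
    rng    : numpy.random.Generator.
    """
    T, K = lik.shape
    pi0 = np.full(K, 1.0 / K)  # uniform initial distribution (assumption)

    # 1. Slice variables: u_t ~ Uniform(0, pi[s_{t-1}, s_t]).
    u = np.empty(T)
    u[0] = rng.uniform(0, pi0[states[0]])
    for t in range(1, T):
        u[t] = rng.uniform(0, pi[states[t - 1], states[t]])

    # 2. Forward filtering: only transitions with pi[j, k] > u_t survive,
    #    so each step sums over a finite set and dynamic programming applies.
    alpha = np.zeros((T, K))
    alpha[0] = (pi0 > u[0]) * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        mask = (pi > u[t]).astype(float)       # (K, K) allowed transitions
        alpha[t] = lik[t] * (mask.T @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()

    # 3. Backward sampling of a new state sequence.
    new_states = np.empty(T, dtype=int)
    new_states[T - 1] = rng.choice(K, p=alpha[T - 1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * (pi[:, new_states[t + 1]] > u[t + 1])
        new_states[t] = rng.choice(K, p=w / w.sum())
    return new_states

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pi = np.array([[0.9, 0.1], [0.2, 0.8]])
    lik = rng.uniform(size=(10, 2))            # stand-in emission likelihoods
    states = rng.integers(0, 2, size=10)
    print(beam_sample_states(pi, lik, states, rng))
```

The slice variables are what make the sweep tractable: conditioned on `u`, only finitely many transitions have probability above the slice, so the forward-backward recursions of the finite HMM apply even though the iHMM has unboundedly many states in principle. The full algorithm would also extend `pi` via stick-breaking whenever a slice variable admits transitions into the as-yet-uninstantiated remainder of the prior.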
This talk is part of the Machine Learning @ CUED series.