Spectral Learning
If you have a question about this talk, please contact Konstantina Palla.
Over the past few years, “spectral methods” have been applied with great success to parameter estimation in latent variable models. In this talk, we will explain how spectral methods work in two models of wide interest: hidden Markov models (HMMs) and latent-variable probabilistic context-free grammars (L-PCFGs). Along the way, we will see that these methods rely only on linear and multilinear algebra, which makes them highly efficient. We will then discuss what is perhaps their most significant advantage: consistency guarantees that are conspicuously absent from classical estimation techniques such as EM (though we will see that the story is more complicated than one might think). Finally, we will introduce the unified theoretical perspective on these algorithms that has recently emerged.
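To make the “only linear and multilinear algebra” point concrete, here is a minimal sketch of a spectral estimator for HMMs in the style of the observable-operator method of Hsu, Kakade and Zhang (2009). It is not taken from the talk itself; the function names, the use of empirical unigram/bigram/trigram statistics, and the choice of NumPy are illustrative assumptions. The only operations involved are an SVD and a few pseudo-inverses.

```python
# Hedged sketch of a spectral HMM estimator (observable-operator style,
# after Hsu, Kakade & Zhang 2009). Names and setup are assumptions made
# for illustration, not material from the talk.
import numpy as np

def spectral_hmm(triples, n_obs, m_states):
    """Estimate observable operators from i.i.d. observation triples (x1, x2, x3)."""
    P1 = np.zeros(n_obs)                     # empirical P(x1)
    P21 = np.zeros((n_obs, n_obs))           # empirical P(x2, x1)
    P3x1 = np.zeros((n_obs, n_obs, n_obs))   # empirical P(x3, x1) for each middle symbol x2
    for x1, x2, x3 in triples:
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2, x3, x1] += 1
    P1 /= len(triples)
    P21 /= len(triples)
    P3x1 /= len(triples)

    # Thin SVD of the pair co-occurrence matrix gives the projection U.
    U, _, _ = np.linalg.svd(P21)
    U = U[:, :m_states]

    # Observable operators: everything below is linear algebra.
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n_obs)]
    return b1, binf, B

def sequence_prob(seq, b1, binf, B):
    """Estimated joint probability of an observation sequence under the learned operators."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)
```

With enough samples, `sequence_prob` approximates the HMM’s joint distribution over observation sequences without ever recovering the transition or emission matrices explicitly, which is the source of the consistency guarantees mentioned above.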
This talk is part of the Machine Learning Reading Group @ CUED series.