Spectral Learning and Decoding for Natural Language Parsing
If you have a question about this talk, please contact Tamara Polajnar.
Spectral methods have received considerable attention in the machine learning and NLP communities. Most recently, they have been applied to latent-variable modelling. In this setting, their two clearest advantages over EM are their computational efficiency and the sound theory behind them (unlike EM, they are not prone to local maxima).
In this talk, I will present two distinct uses of spectral methods for natural language parsing. I will describe a learning algorithm for latent-variable PCFGs, a very useful model for constituent parsing. I will also describe our use of tensor decomposition for speeding up parsing inference: we approximate the underlying model with a tensor decomposition algorithm, and this approximation lets us run fast inference with dynamic programming.
If time permits, I will also touch on the use of spectral decomposition algorithms for unsupervised learning.
Joint work with Michael Collins, Dean Foster, Ankur Parikh, Giorgio Satta, Karl Stratos, Lyle Ungar, and Eric Xing.
This talk is part of the NLIP Seminar Series.