
Bayesian Smoothing for Language Models


If you have a question about this talk, please contact Ekaterina Kochmar.

Smoothing is a central component of language modelling. It aims to improve probability estimates obtained from language data by shifting mass from high-probability events to low- or zero-probability ones, thus "smoothing" the distribution. Many smoothing techniques have been proposed, based on a variety of principles and empirical observations.
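
As a concrete illustration of the mass-shifting idea (not drawn from the talk itself), here is a minimal Python sketch of additive (add-alpha) smoothing for bigram probabilities. The function name, counts and vocabulary size are invented for the example.

    from collections import Counter

    def add_alpha_bigram_prob(prev, word, bigram_counts, unigram_counts, vocab_size, alpha=1.0):
        # Additive (add-alpha) smoothing: every possible bigram receives a
        # pseudo-count of alpha, which shifts probability mass from seen
        # events to unseen ones. Purely illustrative; not the talk's method.
        return (bigram_counts[(prev, word)] + alpha) / (unigram_counts[prev] + alpha * vocab_size)

    bigram_counts = Counter({("the", "cat"): 3, ("the", "dog"): 1})
    unigram_counts = Counter({"the": 4})
    p = add_alpha_bigram_prob("the", "fish", bigram_counts, unigram_counts, vocab_size=10000)
    # "the fish" was never observed, yet it still receives a small non-zero probability.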

In this talk I will present a Bayesian statistical approach to smoothing. By using a hierarchical Bayesian methodology to share information effectively across the different parts of the language model, and by incorporating the prior knowledge that languages obey power-law behaviours using Pitman-Yor processes, we are able to construct language models that achieve state-of-the-art results. Our approach also gives an interesting new interpretation of interpolated Kneser-Ney and of why it works so well. Finally, we describe an extension of our model from finite n-grams to "infinite-grams", which we call the sequence memoizer.
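
To give a rough sense of how Pitman-Yor-based smoothing interpolates with shorter contexts, the following Python recursion sketches the usual hierarchical Pitman-Yor predictive rule under a one-table-per-word-type simplification; with theta = 0 it reduces to interpolated Kneser-Ney, which is the connection the abstract alludes to. The function, its default parameters and the data layout are assumptions made for illustration, not the authors' implementation.

    from collections import Counter

    def hpy_prob(word, context, counts, d=0.75, theta=0.0, vocab_size=10000):
        # Simplified predictive probability in the spirit of a hierarchical
        # Pitman-Yor language model, using a one-table-per-word-type
        # approximation. With theta = 0 this reduces to interpolated
        # Kneser-Ney. All names and defaults are illustrative only.
        if not context:
            # Empty context: fall back to a uniform distribution over the vocabulary.
            return 1.0 / vocab_size
        ctx_counts = counts.get(context, Counter())
        c_ctx = sum(ctx_counts.values())     # total tokens observed after this context
        shorter = hpy_prob(word, context[1:], counts, d, theta, vocab_size)
        if c_ctx == 0:
            return shorter                   # nothing observed: defer to the shorter context
        c_w = ctx_counts[word]               # count of `word` after this context
        t = len(ctx_counts)                  # number of distinct continuation types
        # Discounted count plus mass routed to the shorter-context distribution.
        return (max(c_w - d, 0.0) + (theta + d * t) * shorter) / (theta + c_ctx)

    counts = {("the",): Counter({"cat": 3, "dog": 1})}   # context keyed by tuple of preceding words
    p = hpy_prob("cat", ("the",), counts)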

This is joint work with Frank Wood, Jan Gasthaus, Cedric Archambeau and Lancelot James, and is based on work most recently reported in the Communications of the ACM (Feb 2011 issue).

This talk is part of the NLIP Seminar Series.
