Non-parametric Bayesian Method and Maximum-A-Posteriori Inference in Statistical Machine Translation

If you have a question about this talk, please contact Zoubin Ghahramani.

Recent sophisticated Machine Learning algorithms handle many details implicitly, so practitioners rarely need to worry about how to deploy them in particular situations. With real-life data, however, as in Statistical Machine Translation, several issues are worth considering: 1) the underlying distribution may be better modelled as a power-law distribution than under a simple i.i.d. assumption; 2) noise may not be well captured by a simple Gaussian model (and such a noise assumption is often not embedded in the ML algorithm); 3) available prior knowledge may not be used sufficiently; and so forth. Note that which kinds of non-Gaussian noise to focus on, and which prior knowledge to target, are not evident from the outset. These questions would be difficult even with access to domain experts, since answering them requires both knowledge of the underlying ML algorithm and domain knowledge of the area. We discuss two algorithms in the application area of Statistical Machine Translation: a non-parametric Bayesian method (topics related to the hierarchical Pitman-Yor process) and Maximum-A-Posteriori inference. The first algorithm concerns language model smoothing, where issue 1) arises, while the second concerns word alignment, where issues 2) and 3) arise.
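As background to the first topic, the Pitman-Yor process yields power-law behaviour through its discount parameter, which is why it suits language modelling better than i.i.d.-style assumptions. A minimal sketch of its predictive probability for a single restaurant (not code from the talk; the simplification of one table per word type and all parameter values are assumptions for illustration):

```python
from collections import Counter

def pitman_yor_predictive(counts, word, discount, strength, base_prob):
    """Predictive probability of `word` under a single Pitman-Yor restaurant.

    Assumes one table per observed word type (a common simplification).
    counts:    Counter mapping word -> customer count
    discount:  d in [0, 1); controls the power-law tail
    strength:  theta > -d; concentration parameter
    base_prob: probability of `word` under the base distribution
    """
    n = sum(counts.values())   # total customers seated so far
    t = len(counts)            # total tables (one per type here)
    c_w = counts[word]         # customers eating dish `word`
    t_w = 1 if c_w > 0 else 0  # tables serving dish `word`
    if n == 0:
        return base_prob       # empty restaurant: fall back to the base
    # Discounted count for the word itself...
    seated = max(c_w - discount * t_w, 0.0) / (strength + n)
    # ...plus mass reserved for new tables, redistributed via the base.
    new = (strength + discount * t) / (strength + n) * base_prob
    return seated + new

# Example: a tiny corpus with a uniform base over 3 word types.
counts = Counter({"the": 3, "cat": 1})
p_seen = pitman_yor_predictive(counts, "the", 0.5, 1.0, 1 / 3)
p_unseen = pitman_yor_predictive(counts, "dog", 0.5, 1.0, 1 / 3)
```

With discount 0 this reduces to Dirichlet-process (CRP) smoothing; the hierarchical version discussed in the talk chains such restaurants across n-gram contexts, with each context's base distribution given by its shorter-context parent.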

This talk is part of the Machine Learning @ CUED series.

