
Robust inference for intractable likelihood models using kernel divergences


If you have a question about this talk, please contact Qingyuan Zhao.

Modern statistics and machine learning tools are being applied to increasingly complex phenomena, and as a result make use of increasingly complex models. A large class of such models are the so-called intractable likelihood models, where the likelihood is either too computationally expensive to evaluate or impossible to write down in closed form. This creates significant issues for classical approaches such as maximum likelihood estimation or Bayesian inference, which rely entirely on evaluations of a likelihood. In this talk, we will cover several novel inference schemes which bypass this issue. These will be constructed from kernel-based discrepancies such as maximum mean discrepancies and kernel Stein discrepancies, and can be used in either a frequentist or a Bayesian framework. An important feature of our approach is that it is provably robust, in the sense that a small number of outliers or mild model misspecification will not have a significant impact on parameter estimation. In particular, we will show how the choice of kernel allows us to trade statistical efficiency for robustness. The methodology will then be illustrated on a range of intractable likelihood models in signal processing and biochemistry.
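As a toy illustration of the kind of minimum-distance estimation the abstract alludes to, here is a minimal sketch of minimum MMD estimation for a simulator-based Gaussian location model. The Gaussian kernel, bandwidth, simulator, and optimiser below are illustrative assumptions for this sketch, not the speaker's implementation.

    # Minimal sketch of minimum MMD estimation (toy Gaussian location model).
    # All modelling choices here are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)

    def gaussian_kernel(x, y, bandwidth=1.0):
        # k(x, y) = exp(-(x - y)^2 / (2 * bandwidth^2)), computed pairwise
        d = x[:, None] - y[None, :]
        return np.exp(-0.5 * (d / bandwidth) ** 2)

    def mmd2(x, y, bandwidth=1.0):
        # Biased (V-statistic) estimate of the squared MMD between samples x and y
        kxx = gaussian_kernel(x, x, bandwidth).mean()
        kyy = gaussian_kernel(y, y, bandwidth).mean()
        kxy = gaussian_kernel(x, y, bandwidth).mean()
        return kxx + kyy - 2.0 * kxy

    # Observed data: mostly N(2, 1), contaminated with a few gross outliers
    obs = np.concatenate([rng.normal(2.0, 1.0, 190), np.full(10, 50.0)])

    def objective(theta, n_sim=500):
        # Simulate from the model at theta (likelihood never evaluated) and
        # score the fit by MMD^2; a fixed seed keeps the objective smooth in theta
        sim = np.random.default_rng(1).normal(theta, 1.0, n_sim)
        return mmd2(obs, sim)

    result = minimize_scalar(objective, bounds=(-10.0, 10.0), method="bounded")
    print(f"minimum-MMD estimate: {result.x:.3f}")  # close to 2 despite outliers

Because the Gaussian kernel is bounded, each outlier can shift the objective by only a bounded amount, which is the sense in which such estimators are robust; a sample mean, by contrast, would be dragged far from 2 by the contamination.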

This talk is part of the Statistics series.


