Polynomial time guarantees for sampling based posterior inference

SSDW04 - Monte Carlo sampling: beyond the diffusive regime

The Bayesian approach provides a flexible framework for a wide range of non-parametric inference problems. It crucially relies on computing functionals with respect to the posterior distribution, such as the posterior mean or posterior quantiles for uncertainty quantification. Since the posterior is rarely available in closed form, inference is based on Markov chain Monte Carlo (MCMC) sampling algorithms. The runtime these algorithms require to reach a given target precision typically scales exponentially in the model dimension and the sample size. In contrast, in this talk we will see that sampling-based posterior inference in a general high-dimensional setup is feasible, even without global structural assumptions such as strong log-concavity of the posterior. Given a sufficiently good initialiser, we present polynomial-time convergence guarantees for a widely used gradient-based MCMC sampling scheme. The key idea is to combine posterior contraction with the local curvature induced by the Fisher information of the statistical model near the data-generating truth. We will discuss applications to high-dimensional logistic and Gaussian regression, as well as to density estimation.
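The abstract does not specify the sampler beyond "a widely used gradient based MCMC sampling scheme". As a purely illustrative sketch, the following shows one such scheme, the unadjusted Langevin algorithm, applied to a Bayesian logistic regression posterior with a Gaussian prior; the function names, prior choice, and step-size parameters are assumptions for the example, not part of the talk.

```python
import numpy as np

def log_posterior_grad(theta, X, y, prior_prec=1.0):
    """Gradient of the log posterior for logistic regression with a
    Gaussian N(0, prior_prec^{-1} I) prior (illustrative assumption)."""
    probs = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (y - probs) - prior_prec * theta

def ula_sampler(theta_init, X, y, step_size=1e-3, n_steps=10_000, rng=None):
    """Unadjusted Langevin algorithm:
    theta_{k+1} = theta_k + h * grad log pi(theta_k) + sqrt(2h) * xi_k,
    with xi_k ~ N(0, I). theta_init plays the role of the initialiser."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta_init.copy()
    samples = np.empty((n_steps, theta.shape[0]))
    for k in range(n_steps):
        noise = rng.standard_normal(theta.shape[0])
        theta = (theta
                 + step_size * log_posterior_grad(theta, X, y)
                 + np.sqrt(2.0 * step_size) * noise)
        samples[k] = theta
    return samples
```

Posterior functionals such as the posterior mean can then be approximated by averaging over the returned samples, e.g. `samples.mean(axis=0)`, after discarding a burn-in period.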

This talk is part of the Isaac Newton Institute Seminar Series.
