Convergence bounds for the Random Walk Metropolis algorithm - Perspectives from Isoperimetry

If you have a question about this talk, please contact Dr H Ge.

Abstract: When carrying out inference in probabilistic models, a recurring task is to examine high-dimensional probability measures with complex structure and ‘make sense of’ them in a suitable way. By now, a wide range of practical solutions for this task exists, each offering its own trade-offs between computational efficiency, statistical accuracy, practical robustness, and more. By analogy with fields such as optimisation, we might now seek to answer questions like “which classes of probability measure can be understood efficiently?”, as well as quantitative, algorithm-specific versions of this question. This encourages the rigorous comparison of existing methods, and can guide the design of improved methods.

In recent work, we study this question in the context of the Random Walk Metropolis algorithm, a simple Markov chain-based iterative algorithm for sampling from probability measures, given only access to an unnormalised density. Our analysis highlights the key role of ‘isoperimetry’, a geometric notion for probability measures which appears to robustly capture the complexity of exploring such measures with local algorithms.
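To fix ideas, here is a minimal sketch of the textbook Random Walk Metropolis update in Python. This illustrates the general algorithm rather than the specific implementation analysed in the work; the function name, arguments, and Gaussian proposal scale are all illustrative.

    import numpy as np

    def random_walk_metropolis(log_density, x0, step_size, n_steps, rng=None):
        """Sample from an unnormalised target via Random Walk Metropolis.

        log_density should return the log of the unnormalised density;
        the unknown normalising constant cancels in the acceptance ratio.
        """
        rng = np.random.default_rng() if rng is None else rng
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        log_p = log_density(x)
        samples = np.empty((n_steps, x.size))
        for i in range(n_steps):
            # Propose a Gaussian random-walk move centred at the current state.
            y = x + step_size * rng.standard_normal(x.size)
            log_q = log_density(y)
            # Metropolis accept/reject step.
            if np.log(rng.uniform()) < log_q - log_p:
                x, log_p = y, log_q
            samples[i] = x
        return samples

For instance, log_density = lambda x: -0.5 * np.sum(x**2) targets a standard Gaussian; classical optimal-scaling results suggest taking step_size proportional to d^{-1/2} in dimension d, tuned towards an acceptance rate of roughly 0.234.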

In this talk, I will present our theoretical results on the convergence behaviour of the Random Walk Metropolis algorithm (as well as related results for the Preconditioned Crank-Nicolson algorithm for sampling from Gaussian process (GP) posteriors), and contextualise the role which isoperimetry plays in enabling these results. If time permits, I will also offer high-level comments on some potential implications of isoperimetry for related approximate inference methodologies, e.g. variational inference (VI), normalising flows (NFs), and denoising diffusion models (DDMs).
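For reference, the Preconditioned Crank-Nicolson (pCN) algorithm differs from Random Walk Metropolis only in its proposal: an autoregressive move that leaves a centred Gaussian prior N(0, C) invariant, so that the prior terms cancel and the acceptance ratio involves only the likelihood. A minimal sketch under that standard setting follows; the function names and the Cholesky-factor argument are illustrative assumptions, not the paper's interface.

    import numpy as np

    def pcn(neg_log_lik, prior_cov_chol, x0, beta, n_steps, rng=None):
        """Preconditioned Crank-Nicolson sampling for a posterior
        proportional to exp(-Phi(x)) under a Gaussian prior N(0, C),
        where prior_cov_chol is a Cholesky factor L with C = L L^T.
        """
        rng = np.random.default_rng() if rng is None else rng
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        phi = neg_log_lik(x)
        samples = np.empty((n_steps, x.size))
        for i in range(n_steps):
            # pCN proposal: shrink towards the prior mean, add prior noise.
            xi = prior_cov_chol @ rng.standard_normal(x.size)
            y = np.sqrt(1.0 - beta**2) * x + beta * xi
            phi_y = neg_log_lik(y)
            # Only the likelihood enters the acceptance ratio; the
            # Gaussian prior terms cancel by construction.
            if np.log(rng.uniform()) < phi - phi_y:
                x, phi = y, phi_y
            samples[i] = x
        return samples

Because the prior cancels in the acceptance ratio, the acceptance rate of pCN does not collapse as the discretisation dimension grows, which is one reason it is a natural choice for GP posteriors.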

(joint work with Christophe Andrieu, Anthony Lee, and Andi Wang; preprint available at https://arxiv.org/abs/2211.08959)

Bio: Sam Power is a postdoctoral research associate at the University of Bristol. His research centres on the design and analysis of stochastic algorithms, with applications to problems in modern statistics and machine learning. Some particular interests include Markov chain Monte Carlo and Sequential Monte Carlo algorithms, and how theoretical studies of these methods can enable their robust, automatic, and efficient deployment.

This talk is part of the Machine Learning @ CUED series.
