
Convergent and Scalable Algorithms for Expectation Propagation Approximate Bayesian Inference


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

This event may be recorded and made available internally or externally. Microsoft will own the copyright of any recordings made. If you do not wish to have your image or voice recorded, please consider this before attending.


The expectation propagation (EP, or adaptive TAP) relaxation stands out among variational relaxations of Bayesian inference for the generality and accuracy of its results, and it is widely used in machine learning today. When applied to large-scale continuous-variable models for inverse problems in imaging and computer vision, however, commonly used solvers lack convergence proofs and are too slow to be useful. In this talk, we describe a novel EP algorithm that is both provably convergent and scales to large, densely connected models, drawing a connection between the double-loop algorithm of Opper and Winther (JMLR 2005) and the author's earlier work on scalable algorithms for simpler relaxations. Even for problems of moderate size (such as Gaussian process classification with a few thousand training points), the new algorithm converges at least an order of magnitude faster than the standard (sequential) EP algorithm.
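For readers unfamiliar with EP, the sketch below shows the standard sequential EP baseline the abstract refers to, not the speaker's new convergent algorithm. It runs EP on a toy one-dimensional probit model (Gaussian prior, probit likelihoods), cycling over sites and moment-matching each one against the exact tilted distribution; all names are illustrative, and the moment formulas are the standard ones for Gaussian-times-probit.

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ep_probit_1d(ys, n_sweeps=50):
    """Sequential EP for a toy 1D model:
    prior x ~ N(0, 1), likelihoods Phi(y_i * x) with y_i in {-1, +1}.
    Each site is a Gaussian factor stored in natural parameters
    (tau_i = precision, nu_i = precision * mean)."""
    prior_tau, prior_nu = 1.0, 0.0
    tau = [0.0] * len(ys)   # site precisions
    nu = [0.0] * len(ys)    # site precision-means
    for _ in range(n_sweeps):
        for i, y in enumerate(ys):
            # Cavity distribution: remove site i from the posterior.
            cav_tau = prior_tau + sum(tau) - tau[i]
            cav_nu = prior_nu + sum(nu) - nu[i]
            v, m = 1.0 / cav_tau, cav_nu / cav_tau
            # Moment-match against the tilted distribution
            # cavity(x) * Phi(y * x), using the closed-form moments.
            z = y * m / math.sqrt(1.0 + v)
            r = phi(z) / Phi(z)
            m_new = m + y * v * r / math.sqrt(1.0 + v)
            v_new = v - v * v * r * (z + r) / (1.0 + v)
            # Convert the matched moments back into an updated site.
            tau[i] = 1.0 / v_new - cav_tau
            nu[i] = m_new / v_new - cav_nu
    post_tau = prior_tau + sum(tau)
    post_nu = prior_nu + sum(nu)
    return post_nu / post_tau, 1.0 / post_tau  # posterior mean, variance

mean, var = ep_probit_1d([+1, +1, +1])
```

In this scalar case each site update touches the full posterior, which is cheap; the scaling difficulties the talk addresses arise in densely connected multivariate models, where each sequential update requires expensive covariance computations.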

Partly joint work with Hannes Nickisch.

Speaker information: Professor Matthias Seeger runs the Laboratory for Probabilistic Machine Learning (LAPMAP) at EPFL.

This talk is part of the Microsoft Research Cambridge, public talks series.


© 2006-2023, University of Cambridge.