
Variational Bayes and Beyond: Foundations of Scalable Bayesian Inference


If you have a question about this talk, please contact J.W.Stevens.

Bayesian methods exhibit a number of desirable properties for modern data analysis, including (1) coherent quantification of uncertainty, (2) a modular modeling framework able to capture complex phenomena, (3) the ability to incorporate prior information from an expert source, and (4) interpretability. In practice, though, Bayesian inference necessitates approximation of a high-dimensional integral, and some traditional algorithms for this purpose can be slow, notably at data scales of current interest. The tutorial will cover the foundations of some modern tools for fast, approximate Bayesian inference at scale. One increasingly popular framework is provided by "variational Bayes" (VB), which formulates Bayesian inference as an optimization problem. We will examine key benefits and pitfalls of using VB in practice, with a focus on the widespread "mean-field variational Bayes" (MFVB) subtype. We will highlight properties that anyone working with VB, from the data analyst to the theoretician, should be aware of. And we will discuss a number of open challenges.
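To make the "inference as optimization" idea concrete, the following is a minimal sketch (not from the talk itself) of coordinate-ascent mean-field VB for a standard textbook model: observations x_i ~ N(mu, 1/tau) with independent priors mu ~ N(mu0, 1/lam0) and tau ~ Gamma(a0, b0). The factorization q(mu, tau) = q(mu) q(tau) and all function and variable names below are illustrative assumptions.

```python
import numpy as np

def mfvb_normal(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=100):
    """Coordinate-ascent mean-field VB for x_i ~ N(mu, 1/tau).

    Priors: mu ~ N(mu0, 1/lam0), tau ~ Gamma(a0, b0) (shape/rate).
    Approximating family: q(mu, tau) = q(mu) q(tau), with
    q(mu) = N(m, 1/lam_mu) and q(tau) = Gamma(a, b).
    Illustrative sketch, not code from the tutorial.
    """
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), np.mean(x)
    E_tau = a0 / b0                 # initial guess for E_q[tau]
    a = a0 + n / 2.0                # Gamma shape is fixed given n
    for _ in range(iters):
        # Update q(mu) = N(m, 1/lam_mu) holding q(tau) fixed
        lam_mu = lam0 + n * E_tau
        m = (lam0 * mu0 + E_tau * n * xbar) / lam_mu
        # Update q(tau) = Gamma(a, b) holding q(mu) fixed:
        # E_q[sum (x_i - mu)^2] = sum (x_i - m)^2 + n * Var_q(mu)
        b = b0 + 0.5 * (np.sum((x - m) ** 2) + n / lam_mu)
        E_tau = a / b
    return m, 1.0 / lam_mu, a, b    # q(mu) mean and variance, q(tau) params

# Example: data generated with true mean 2.0 and standard deviation 0.5
rng = np.random.default_rng(0)
x = rng.normal(2.0, 0.5, size=1000)
m, v, a, b = mfvb_normal(x)
# m should sit near the sample mean; a / b estimates the precision 1/0.25 = 4
```

Each coordinate update provably increases the evidence lower bound (ELBO), so the loop is an optimization rather than a sampling procedure. A well-known pitfall the abstract alludes to: the factorized q typically underestimates posterior variance when mu and tau are correlated under the true posterior.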

This talk is part of the CCIMI Short Course: Tamara Broderick (MIT) series.


© 2006-2020 Talks.cam, University of Cambridge.