Scaling and Generalizing Approximate Bayesian Inference

If you have a question about this talk, please contact Louise Segar.

Latent variable models have become a key tool for the modern statistician, letting us express complex assumptions about the hidden structures that underlie our data. They have been successfully applied in numerous fields.

The central computational problem in latent variable modeling is posterior inference: approximating the conditional distribution of the latent variables given the observations. Posterior inference is central to both exploratory and predictive tasks. Approximate posterior inference algorithms have revolutionized Bayesian statistics, revealing its potential as a usable and general-purpose language for data analysis.
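For concreteness (the notation here is generic, not taken from the abstract): for latent variables z and observations x, posterior inference targets

p(z \mid x) = \frac{p(z)\, p(x \mid z)}{\int p(z)\, p(x \mid z)\, dz},

where the integral in the denominator, the marginal likelihood of the data, is intractable for most models of interest; approximate inference algorithms are ways of working around this normalizing constant.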

Bayesian statistics, however, has not yet reached this potential. First, statisticians and scientists regularly encounter massive data sets, but existing approximate inference algorithms do not scale well. Second, most approximate inference algorithms are not generic; each must be adapted to the specific model at hand.

In this talk I will discuss our recent research on addressing these two limitations. I will describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models of text conditioned on millions of articles. I will then discuss black box variational inference, a generic algorithm for approximating the posterior that can be applied to many models with little model-specific derivation and few restrictions on their properties. I will demonstrate its use on longitudinal models of healthcare data and deep exponential families, and discuss a new black box variational inference algorithm in the Stan programming language.
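As a rough illustration of the black box idea, here is a minimal sketch, on a toy conjugate model of my own choosing rather than any model from the papers, of the score-function gradient estimator that black box variational inference builds on: the ELBO gradient is estimated as a Monte Carlo average of grad_lambda log q(z; lambda) times (log p(x, z) - log q(z; lambda)), which needs only samples from q and evaluations of the log joint, with no model-specific derivation. The batch-mean baseline below is a crude stand-in for the Rao-Blackwellization and control variates developed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (chosen only so the exact posterior is available for comparison):
# prior z ~ N(0, 1), likelihood x_i ~ N(z, 1).
z_true = 1.5
x = rng.normal(z_true, 1.0, size=50)

def log_joint(z):
    # log p(x, z) = log p(z) + sum_i log p(x_i | z).
    log_prior = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
    log_lik = np.sum(-0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi))
    return log_prior + log_lik

# Variational family q(z) = N(mu, exp(log_sigma)^2); lambda = (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
step, num_samples = 0.005, 64

for t in range(5000):
    sigma = np.exp(log_sigma)
    z = rng.normal(mu, sigma, size=num_samples)

    # Score function: gradient of log q(z; lambda) with respect to lambda.
    score_mu = (z - mu) / sigma ** 2
    score_log_sigma = ((z - mu) / sigma) ** 2 - 1.0

    # Learning signal log p(x, z) - log q(z; lambda), centered by a batch-mean baseline
    # (a crude control variate; E_q[score] = 0, so subtracting a constant removes most
    # of the variance without changing what the estimator targets).
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    signal = np.array([log_joint(zi) for zi in z]) - log_q
    signal -= signal.mean()

    # Noisy gradient ascent on the ELBO.
    mu += step * np.mean(score_mu * signal)
    log_sigma += step * np.mean(score_log_sigma * signal)

# Exact posterior for this conjugate model: N(sum(x) / (n + 1), 1 / (n + 1)).
n = len(x)
print("BBVI estimate:   mu = %.3f, sigma = %.3f" % (mu, np.exp(log_sigma)))
print("Exact posterior: mu = %.3f, sigma = %.3f" % (x.sum() / (n + 1), (1.0 / (n + 1)) ** 0.5))

The sketch roughly recovers the exact posterior mean and standard deviation. In the same spirit, stochastic variational inference addresses the scaling limitation by evaluating the likelihood on a random minibatch of data scaled up to the full data set size, and following such noisy (natural) gradients with a decreasing step size.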

This is joint work based on these three papers:

M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347, 2013.

http://www.cs.columbia.edu/blei/papers/HoffmanBleiWangPaisley2013.pdf

R. Ranganath, S. Gerrish, and D. Blei. Black box variational inference. Artificial Intelligence and Statistics, 2014.

http://www.cs.columbia.edu/blei/papers/RanganathGerrishBlei2014.pdf

A. Kucukelbir, R. Ranganath, A. Gelman, and D. Blei. Automatic variational inference in Stan. Neural Information Processing Systems, 2015.

http://www.cs.columbia.edu/~blei/papers/KucukelbirRanganathGelmanBlei2015.pdf

This talk is part of the Machine Learning @ CUED series.
