Scaling and Generalizing Approximate Bayesian Inference
Latent variable models have become a key tool for the modern statistician, letting us express complex assumptions about the hidden structures that underlie our data. Latent variable models have been successfully applied in numerous fields. The central computational problem in latent variable modeling is posterior inference, the problem of approximating the conditional distribution of the latent variables given the observations. Posterior inference is central to both exploratory and predictive tasks.

Approximate posterior inference algorithms have revolutionized Bayesian statistics, revealing its potential as a usable and general-purpose language for data analysis. Bayesian statistics, however, has not yet reached this potential. First, statisticians and scientists regularly encounter massive data sets, but existing approximate inference algorithms do not scale well. Second, most approximate inference algorithms are not generic; each must be adapted to the specific model at hand.

In this talk I will discuss our recent research on addressing these two limitations. I will describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models fit to millions of articles. Then I will discuss black box variational inference, a generic algorithm for approximating the posterior that can be applied to many models with little model-specific derivation and few restrictions on their properties. I will demonstrate its use on longitudinal models of healthcare data and on deep exponential families, and discuss a new black box variational inference algorithm in the Stan programming language.

This is joint work based on these three papers:

M. Hoffman, D. Blei, J. Paisley, and C. Wang. Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347, 2013.
R. Ranganath, S. Gerrish, and D. Blei. Black box variational inference. Artificial Intelligence and Statistics, 2014.
A. Kucukelbir, R. Ranganath, A. Gelman, and D. Blei. Automatic variational inference in Stan. Neural Information Processing Systems, 2015.
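The central trick behind black box variational inference is that the gradient of the ELBO can be estimated using only samples from the variational distribution q, the score function (the gradient of log q with respect to its own parameters), and evaluations of the model's log joint, with no model-specific derivations. The sketch below is illustrative only and is not code from the talk or the papers above: it fits a Gaussian variational approximation q(mu) = N(m, s^2) to the posterior over the mean of a Gaussian with a standard normal prior, following noisy Monte Carlo gradients of the ELBO. The toy model, parameter names, step size, and sample counts are all assumptions chosen for the example.

```python
# Illustrative sketch of a score-function ("black box") variational inference
# update on a toy model (not code from the talk or the cited papers).
# Model assumed here: mu ~ N(0, 1), x_i ~ N(mu, 1); variational family q(mu) = N(m, s^2).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=500)               # synthetic observations

def log_joint(mu, x):
    # log p(mu, x) = log N(mu; 0, 1) + sum_i log N(x_i; mu, 1), up to constants
    return -0.5 * mu ** 2 - 0.5 * np.sum((x - mu) ** 2)

def log_q(mu, m, log_s):
    # log of the Gaussian variational density q(mu; m, s), with s = exp(log_s)
    s = np.exp(log_s)
    return -0.5 * ((mu - m) / s) ** 2 - log_s - 0.5 * np.log(2.0 * np.pi)

m, log_s = 0.0, 0.0                              # variational parameters
lr, n_samples = 1e-4, 32

for step in range(2000):
    s = np.exp(log_s)
    mus = rng.normal(m, s, size=n_samples)       # draws from q
    # ELBO integrand evaluated at each draw, centred by a simple baseline
    # (a crude control variate) to reduce the variance of the estimator
    weights = np.array([log_joint(mu, x) for mu in mus]) - log_q(mus, m, log_s)
    weights = weights - weights.mean()
    score_m = (mus - m) / s ** 2                 # d/dm     log q(mu; m, s)
    score_log_s = ((mus - m) / s) ** 2 - 1.0     # d/dlog_s log q(mu; m, s)
    m += lr * np.mean(score_m * weights)         # noisy stochastic-gradient ascent
    log_s += lr * np.mean(score_log_s * weights)

print("variational mean %.3f vs exact posterior mean %.3f"
      % (m, x.sum() / (len(x) + 1)))
```

The only model-specific ingredient is the log-joint evaluation, which is what makes the scheme "black box." Stochastic variational inference scales the analogous computation to massive data by replacing the full-data log joint with an unbiased estimate computed from a subsampled minibatch, and the Stan work automates the remaining choices so that a model written as a Stan program can be handed directly to a variational inference engine.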
This talk is part of the Machine Learning @ CUED series.