
Convex Variational Bayesian Inference for Large Scale Generalized Linear Models


If you have a question about this talk, please contact Carl Edward Rasmussen.

Bayesian inference for most generalized linear models is analytically intractable. We show that a well-known variational relaxation leads to a convex problem for any log-concave model, and we provide a generic double-loop algorithm for solving it on models with arbitrary super-Gaussian potentials. We iteratively decouple the criterion so that most of the computational work is done by solving large linear systems, rendering our algorithm much faster than previously proposed solvers. We evaluate our method on problems of Bayesian active learning for large binary classification models.
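To illustrate the flavor of algorithm described, here is a minimal sketch of variational Gaussian inference for Bayesian logistic regression using the classical Jaakkola-Jordan quadratic bound, a well-known relaxation of this super-Gaussian type. This is not the speakers' method; it simply shows the structure the abstract alludes to: an outer loop refits the bound's variational parameters, and each inner step reduces to one symmetric positive definite linear system. All names and defaults below (fit_vb_logistic, alpha, and so on) are illustrative.

```python
# Sketch: variational bounding for Bayesian logistic regression via the
# Jaakkola-Jordan quadratic bound. Illustrative only; the talk's double-loop
# algorithm for general super-Gaussian potentials differs in details.
import numpy as np

def fit_vb_logistic(X, y, alpha=1.0, n_outer=50, tol=1e-6):
    """X: (n, d) features; y: (n,) labels in {-1, +1}; alpha: prior precision."""
    n, d = X.shape
    xi = np.ones(n)  # one variational parameter per likelihood term
    for _ in range(n_outer):
        # lambda(xi) from the Jaakkola-Jordan bound on the logistic potential
        lam = np.tanh(xi / 2.0) / (4.0 * xi)
        # With the bound fixed, the problem is Gaussian: the dominant cost is
        # one d x d symmetric positive definite linear system.
        A = alpha * np.eye(d) + 2.0 * (X.T * lam) @ X
        S = np.linalg.inv(A)              # posterior covariance (use solves at scale)
        m = S @ (X.T @ (y / 2.0))         # posterior mean
        # Refit the bound: xi_i^2 = E[(x_i^T w)^2] under the current Gaussian
        xi_new = np.sqrt(np.einsum('ij,jk,ik->i', X, S + np.outer(m, m), X))
        if np.max(np.abs(xi_new - xi)) < tol:
            xi = xi_new
            break
        xi = xi_new
    return m, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
    m, S = fit_vb_logistic(X, y)
    print("posterior mean:", np.round(m, 2))
```

At large scale one would avoid the explicit inverse and use iterative solvers such as conjugate gradients for the linear systems, which is the regime where an algorithm dominated by linear solves pays off.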

This talk is part of the Machine Learning @ CUED series.

