Variational Inference for Non-Conjugate Models

If you have a question about this talk, please contact Zoubin Ghahramani.

In many statistical techniques, such as the computation of the data likelihood in the presence of nuisance parameters, prediction in the presence of missing data, or the computation of the posterior distribution over parameters, the core task can be expressed as a high-dimensional integration problem to which standard numerical approximation tools cannot be directly applied. The first part of the talk will introduce Split Variational Inference, a generic way of computing large-scale non-Gaussian integrals by splitting them into a sum of small pieces that are easier to approximate by unnormalized Gaussian distributions. This leads to an anytime-improving algorithm that can be viewed as a generalization of mixture mean-field (a variational algorithm whose approximating family is a mixture). The second part of the talk will present recent developments on the use of variational bounds to solve large-scale factor analysis/matrix factorization problems when data are heterogeneous (i.e. when there are both discrete and continuous observations) and heteroscedastic (i.e. when the data variance is not the same for all the observed entities).
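To illustrate the idea behind splitting, here is a minimal one-dimensional sketch (not the speaker's algorithm): a single unnormalized Gaussian fitted at one mode of a bimodal integrand misses mass, while splitting the integral into two pieces and fitting one Gaussian per piece recovers it. The Laplace-style curvature fit and the hard split at x = 0 are simplifying assumptions made for this toy example.

```python
import numpy as np

def gaussian_piece(log_f, x0, h=1e-4):
    """Approximate the integral of exp(log_f) over one piece by fitting
    an unnormalized Gaussian at the piece's mode x0 (Laplace's method):
    exp(log_f(x0)) * sqrt(2*pi / -curvature)."""
    # Curvature of log_f at x0 by central finite differences.
    d2 = (log_f(x0 + h) - 2.0 * log_f(x0) + log_f(x0 - h)) / h**2
    return np.exp(log_f(x0)) * np.sqrt(2.0 * np.pi / -d2)

def log_f(x):
    """Bimodal, non-Gaussian integrand: an equal mixture of two unit
    Gaussians centred at -3 and +3, so the true integral is 1."""
    return np.log(0.5 * np.exp(-0.5 * (x + 3.0) ** 2)
                  + 0.5 * np.exp(-0.5 * (x - 3.0) ** 2)) \
        - 0.5 * np.log(2.0 * np.pi)

# A single Gaussian fit at one mode captures only half the mass...
single = gaussian_piece(log_f, -3.0)
# ...whereas splitting the axis at x = 0 and summing one unnormalized
# Gaussian per piece recovers the full integral.
split = gaussian_piece(log_f, -3.0) + gaussian_piece(log_f, 3.0)
print(round(single, 3), round(split, 3))
```

The full method discussed in the talk goes further, using soft partitions and variational bounds rather than a hard split with Laplace fits, but the sum-of-Gaussian-pieces structure is the same.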

Collaborators involved in these works include Onno Zoeter, Matthias Seeger, Cedric Archambeau, Balaji Lakshminarayanan, Emtiyaz Khan, Ben Marlin and Kevin Murphy.

This talk is part of the Machine Learning @ CUED series.

