
Implicit Variational Inference


If you have a question about this talk, please contact jg801.

Variational inference is frequently the preferred method for approximating complicated posteriors arising in Bayesian inference. Classical variational methods restrict the approximate posterior to the exponential family, which can introduce substantial bias into the estimation of model parameters. In recent years, many methods have been devised to allow more flexible posteriors. In this talk, we will discuss recent advances in variational inference with implicit distributions: distributions from which we can draw samples and compute gradients, but whose densities have no analytic expression.
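To make the notion concrete, here is a minimal sketch (not from the talk) of an implicit distribution: fixed Gaussian noise pushed through a nonlinear map. The map and its weights are hypothetical, chosen only for illustration; the point is that sampling is trivial while the resulting density has no closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed weights defining the pushforward map
# (in implicit VI these would be the parameters of a generator network).
W1 = rng.standard_normal((2, 8))
W2 = rng.standard_normal((8, 2))

def sample_implicit(n):
    """Draw n samples from an implicit distribution.

    We push Gaussian base noise through a nonlinear map: samples are
    easy to obtain (and, with autodiff, gradients flow through the map),
    but the density of the output has no analytic expression.
    """
    eps = rng.standard_normal((n, 2))  # base noise
    h = np.tanh(eps @ W1)              # nonlinear transform
    return h @ W2                      # samples; density intractable

samples = sample_implicit(1000)
print(samples.shape)  # (1000, 2)
```

The methods in the suggested reading differ mainly in how they cope with the missing density, e.g. estimating density ratios adversarially or with kernels, or restricting the construction so a tractable bound still exists.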

Suggested reading:

- Adversarial Variational Bayes – Mescheder et al., 2017
- Kernel Implicit Variational Inference – Shi et al., 2018
- Semi-Implicit Variational Inference – Yin & Zhou, 2018
- Unbiased Implicit Variational Inference – Titsias & Ruiz, 2019

This talk is part of the Machine Learning Reading Group @ CUED series.


