
Efficient Inference and Learning with Intractable Posteriors? Yes, Please.


If you have a question about this talk, please contact Dr Jes Frellsen.

We discuss a number of recent advances in Stochastic Gradient Variational Inference (SGVI).

  • Blending ideas from variational inference, deep learning and stochastic optimization, we derive an algorithm for efficient gradient-based inference and learning with intractable posteriors (a minimal reparameterization-trick sketch follows this list).
  • Applied to deep latent-variable models with neural networks as components, this results in the Variational Auto-Encoder (VAE), a principled Bayesian auto-encoder. We show that VAEs can be useful for semi-supervised learning and analogical reasoning.
  • Further improvements are realized through a new variational bound with auxiliary variables. Markov Chain Monte Carlo (MCMC) can be cast as variational inference with auxiliary variables; this interpretation allows principled optimization of MCMC parameters to greatly improve MCMC efficiency (one form of such a bound is written out after this list).
  • When applying SGVI to global parameters, we show how an order-of-magnitude reduction in gradient variance can be achieved through local reparameterization while retaining parallelizability. Gaussian dropout can be cast as a special case of such SGVI with a scale-free prior; this variational interpretation of dropout allows for simple optimization of dropout rates (a sketch of local reparameterization follows this list).
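
As a rough illustration of the first point, the sketch below fits a Gaussian variational posterior to a toy intractable posterior using reparameterized gradients. The toy model (standard normal prior, tanh likelihood), the optimizer settings, and all variable names are assumptions made here for illustration, not details from the talk.

    # Minimal SGVI sketch: maximize a Monte Carlo estimate of the ELBO with the
    # reparameterization trick. The toy model is assumed purely for illustration.
    import math
    import torch

    torch.manual_seed(0)
    x = torch.tensor(1.5)                          # one observed datapoint

    # Variational posterior q(z) = N(mu, sigma^2), parameterized for optimization.
    mu = torch.zeros(1, requires_grad=True)
    log_sigma = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

    def log_joint(z):
        # log p(z) + log p(x | z), up to additive constants
        log_prior = -0.5 * z ** 2
        log_lik = -0.5 * (x - torch.tanh(z)) ** 2
        return log_prior + log_lik

    for step in range(2000):
        eps = torch.randn(64)                      # noise from a fixed base distribution
        z = mu + torch.exp(log_sigma) * eps        # reparameterization: z = g(eps; mu, sigma)
        entropy = log_sigma + 0.5 * math.log(2 * math.pi * math.e)   # closed-form entropy of q
        elbo = log_joint(z).mean() + entropy       # Monte Carlo estimate of the ELBO
        loss = -elbo.squeeze()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(float(mu), float(torch.exp(log_sigma)))  # fitted variational mean and scale

Because z is written as a deterministic, differentiable function of the variational parameters and independent noise, ordinary backpropagation yields unbiased gradient estimates of the bound.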
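The third point refers to extending the variational distribution with auxiliary variables. One common way to write such a bound (notation assumed here: z the latent variable, y the auxiliary variables, q the extended variational distribution, r an auxiliary "reverse" model) is

    \log p(x) \;\ge\; \mathbb{E}_{q(z, y \mid x)}\!\left[ \log p(x, z) + \log r(y \mid x, z) - \log q(z, y \mid x) \right].

Choosing q(z, y | x) to be the joint distribution of an MCMC chain targeting p(z | x), with y the intermediate states and z the final state, gives the interpretation of MCMC as variational inference; the bound can then be optimized with respect to the parameters of the transition kernels.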
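The last point concerns the local reparameterization trick. The sketch below shows the idea for a single fully connected layer with a factorized Gaussian posterior over its weights: sample the pre-activations rather than the weights, so each datapoint in the minibatch receives independent noise. Shapes, sizes, and names here are illustrative assumptions.

    # Local reparameterization sketch for a linear layer with weight posterior
    # N(mu, sigma^2), factorized over weight entries. Instead of sampling a weight
    # matrix shared by the whole minibatch, sample the pre-activations, whose
    # distribution under the weight posterior is also Gaussian.
    import torch

    def local_reparam_linear(a, mu, log_sigma2):
        # a: (batch, n_in) activations; mu, log_sigma2: (n_in, n_out) posterior parameters
        gamma = a @ mu                              # E[b] for pre-activations b = a @ W
        delta = (a ** 2) @ torch.exp(log_sigma2)    # Var[b]
        eps = torch.randn_like(gamma)               # independent noise per datapoint and unit
        return gamma + torch.sqrt(delta) * eps

    a = torch.randn(32, 100)                        # a minibatch of activations
    mu = 0.1 * torch.randn(100, 10)
    log_sigma2 = torch.full((100, 10), -4.0)
    b = local_reparam_linear(a, mu, log_sigma2)
    print(b.shape)                                  # torch.Size([32, 10])

Sampling at the level of pre-activations decorrelates the noise across datapoints in the minibatch, which is where the variance reduction comes from, while the computation remains two matrix multiplications and stays fully parallelizable.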

This talk is part of the Machine Learning @ CUED series.
