
Variational Bayes as Surrogate Regression


If you have a question about this talk, please contact Elre Oldewage.

Variational Bayes is a useful approximate inference framework in which an intractable posterior distribution is approximated by a simpler, tractable one. How useful this is usually depends on how closely the approximation matches the true posterior, and on how quickly it can be obtained. We’ll present lines of work that use the posteriors of tractable models as this approximation, and the interesting inference algorithms that arise in this setting.
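
As a concrete illustration of the core idea (a minimal sketch, not drawn from the talk itself; the model and all hyperparameters below are illustrative assumptions), here is a toy 1D Bayesian logistic regression whose exact posterior has no closed form, approximated by a tractable Gaussian surrogate fitted by maximising the ELBO with reparameterisation-trick Monte Carlo gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data for 1D logistic regression: the exact posterior over the
# weight theta is intractable, so we approximate it variationally.
x = rng.normal(size=50)
y = (rng.uniform(size=50) < 1.0 / (1.0 + np.exp(-1.5 * x))).astype(float)

def grad_log_joint(theta):
    """d/d theta of [log p(y | theta, x) + log p(theta)], per sample of theta.

    Bernoulli likelihood with logits theta * x, standard-normal prior.
    """
    logits = theta[:, None] * x[None, :]
    return (y[None, :] - 1.0 / (1.0 + np.exp(-logits))) @ x - theta

# Tractable surrogate posterior: q(theta) = N(mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0
lr, n_mc = 0.05, 32

for _ in range(500):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=n_mc)
    theta = mu + sigma * eps  # reparameterisation trick

    # Monte Carlo gradients of the ELBO, L(q) = E_q[log p(y, theta)] + H[q];
    # for a Gaussian, dH/d(log_sigma) = 1, hence the "+ 1.0" below.
    g = grad_log_joint(theta)
    mu += lr * g.mean()
    log_sigma += lr * ((g * eps).mean() * sigma + 1.0)

print(f"q(theta) = N({mu:.3f}, {np.exp(log_sigma):.3f}^2)")
```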

Although we’ll cover all of these in the presentation, it will be helpful to have some familiarity with the basics of variational Bayes (e.g. what the ELBO is), variational autoencoders and the idea of amortised inference, exponential families, and Gaussian processes. A basic understanding of natural gradients would also be helpful, but is not essential.
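
For reference, a standard statement of the ELBO (the objective variational Bayes maximises); this is textbook material rather than anything specific to the talk:

```latex
\log p(y)
  \;\ge\; \mathcal{L}(q)
  \;=\; \mathbb{E}_{q(\theta)}\!\big[\log p(y, \theta) - \log q(\theta)\big]
  \;=\; \log p(y) - \mathrm{KL}\!\big(q(\theta) \,\|\, p(\theta \mid y)\big)
```

Since \(\log p(y)\) does not depend on \(q\), maximising the ELBO over a tractable family is equivalent to minimising the KL divergence from \(q\) to the exact posterior.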

If you have the time, please read this: Opper, Manfred, and Cédric Archambeau. “The variational Gaussian approximation revisited.” Neural Computation 21.3 (2009): 786–792.

Extra reading if you have time on your hands:
  1. Bui, Thang D., et al. “Partitioned variational inference: A unified framework encompassing federated and continual learning.” arXiv preprint arXiv:1811.11206 (2018).
  2. Ashman, Matthew, et al. “Sparse Gaussian Process Variational Autoencoders.” arXiv preprint arXiv:2010.10177 (2020).
  3. Khan, Mohammad Emtiyaz, and Didrik Nielsen. “Fast yet simple natural-gradient descent for variational inference in complex models.” 2018 International Symposium on Information Theory and Its Applications (ISITA). IEEE, 2018.
  4. Chang, Paul E., et al. “Fast variational learning in state-space Gaussian process models.” 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020.
  5. Johnson, Matthew James, et al. “Composing graphical models with neural networks for structured representations and fast inference.” Proceedings of the 30th International Conference on Neural Information Processing Systems. 2016.

This talk is part of the Machine Learning Reading Group @ CUED series.
