Pseudo-Points and State-Space Gaussian Processes

If you have a question about this talk, please contact Elre Oldewage.

Two recent papers [1, 2] combine pseudo-point approximations with the Markov structure of temporal GPs to produce even more scalable approximate inference for long time series. We’ll review both, in particular the tricks needed to make them work. A basic familiarity with GPs and pseudo-point approximations [3, 4] would be helpful, as would having skimmed [5].
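
As background, the Markov structure alluded to above rewrites a one-dimensional GP as a linear-Gaussian state-space model, so exact inference under a Gaussian likelihood costs O(N) via Kalman filtering rather than the usual O(N^3); papers [1, 2] then introduce pseudo-points into this formulation. Below is a minimal NumPy sketch for the Matérn-1/2 (Ornstein-Uhlenbeck) kernel; the function and parameter names are illustrative choices of ours, not taken from the referenced papers.

import numpy as np

def matern12_kalman_filter(t, y, noise_var, lengthscale, signal_var):
    # O(N) GP regression using the state-space form of the Matern-1/2
    # kernel k(t, t') = signal_var * exp(-|t - t'| / lengthscale),
    # i.e. an Ornstein-Uhlenbeck process observed with Gaussian noise.
    n_obs = len(t)
    m, P = 0.0, signal_var           # stationary prior: x_0 ~ N(0, P_inf)
    means = np.empty(n_obs)
    variances = np.empty(n_obs)
    for n in range(n_obs):
        if n > 0:
            # Predict: exact discretisation of the OU transition over dt
            dt = t[n] - t[n - 1]
            A = np.exp(-dt / lengthscale)
            m = A * m
            P = A * A * P + signal_var * (1.0 - A * A)
        # Update: condition on y_n ~ N(x_n, noise_var)
        S = P + noise_var            # innovation variance
        K = P / S                    # Kalman gain
        m = m + K * (y[n] - m)
        P = (1.0 - K) * P
        means[n], variances[n] = m, P
    # These are the filtering marginals; a backward (Rauch-Tung-Striebel)
    # pass would give the smoothing posterior that matches batch GP regression.
    return means, variances

# Example: 10,000 points is routine here, where dense O(N^3) GP regression is not.
t = np.sort(np.random.rand(10_000)) * 100.0
y = np.sin(t) + 0.3 * np.random.randn(t.size)
mu, var = matern12_kalman_filter(t, y, noise_var=0.09, lengthscale=1.0, signal_var=1.0)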

[1] – Adam, Vincent, et al. “Doubly Sparse Variational Gaussian Processes.” International Conference on Artificial Intelligence and Statistics. PMLR, 2020.

[2] – Wilkinson, William, Arno Solin, and Vincent Adam. “Sparse Algorithms for Markovian Gaussian Processes.” International Conference on Artificial Intelligence and Statistics. PMLR, 2021.

[3] – Titsias, Michalis. “Variational Learning of Inducing Variables in Sparse Gaussian Processes.” Artificial Intelligence and Statistics. PMLR, 2009.

[4] – Hensman, James, Nicolò Fusi, and Neil D. Lawrence. “Gaussian Processes for Big Data.” Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence. 2013.

[5] – Opper, Manfred, and Cédric Archambeau. “The Variational Gaussian Approximation Revisited.” Neural Computation 21.3 (2009): 786-792.

This talk is part of the Machine Learning Reading Group @ CUED series.
