Deep Gaussian Processes
If you have a question about this talk, please contact Rowan McAllister.
Abstract
Deep Gaussian processes are a probabilistic approach to modelling data that employs a hierarchical structure with nonparametric mappings between layers. This family of models is attractive because it can potentially discover layers of increasingly abstract representations of the data, in much the same way as its deep parametric counterparts, while also handling and propagating uncertainty through the hierarchy. Standard inference and learning schemes, however, are not analytically tractable for this model class. In this talk, we will review deep Gaussian processes and their relationship to other models, before describing a recently developed variational free energy treatment that sidesteps these analytical intractabilities and provides approximate posteriors together with a tractable lower bound on the marginal likelihood.
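To make the hierarchical structure concrete, here is a minimal generative sketch of a two-layer deep GP in Python (NumPy only). The squared-exponential kernels, lengthscales, and noise level are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: sampling from a two-layer deep GP prior.
# Kernel choices, lengthscales, and noise level are illustrative assumptions.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between 1-d input arrays a and b."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 100)
n = len(x)
jitter = 1e-8 * np.eye(n)  # numerical stabiliser for the Cholesky/SVD

# Layer 1: draw a random hidden function h = f1(x) from a GP prior.
K1 = rbf_kernel(x, x) + jitter
h = rng.multivariate_normal(np.zeros(n), K1)

# Layer 2: draw y = f2(h) from a GP prior whose *inputs* are the layer-1
# outputs, so randomness in h propagates into the covariance of y.
K2 = rbf_kernel(h, h, lengthscale=0.5) + jitter
y = rng.multivariate_normal(np.zeros(n), K2) + 0.05 * rng.standard_normal(n)
```

Each draw of the hidden layer h changes the effective covariance of y, which is the uncertainty propagation the abstract refers to; exact inference must integrate over h, which has no closed form and is precisely the intractability that the variational free energy treatment sidesteps.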
Recommended reading:
The suggested reading is Damianou and Lawrence (2013); for more detail on the flavour of variational inference used in deep GPs and related models, see Titsias and Lawrence (2010) and Damianou et al. (2014). A sketch of the single-layer variational bound these papers build on follows the list below.
- Damianou, Andreas, and Neil D. Lawrence. Deep Gaussian Processes. Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
- Titsias, Michalis K., and Neil D. Lawrence. Bayesian Gaussian Process Latent Variable Model. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
- Damianou, Andreas, Michalis K. Titsias, and Neil D. Lawrence. Variational Inference for Uncertainty on the Inputs of Gaussian Process Models. arXiv preprint, 2014.
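As a rough illustration of the flavour of variational inference in these papers, consider the single-layer case (the collapsed sparse-GP bound of Titsias, AISTATS 2009, on which the listed work builds). For n observations y with noise variance \sigma^2 and m inducing inputs, the marginal likelihood admits the tractable lower bound

\[
\log p(\mathbf{y}) \;\ge\; \log \mathcal{N}\!\left(\mathbf{y} \mid \mathbf{0},\, \mathbf{Q}_{nn} + \sigma^2 \mathbf{I}\right) \;-\; \frac{1}{2\sigma^2}\,\mathrm{tr}\!\left(\mathbf{K}_{nn} - \mathbf{Q}_{nn}\right),
\qquad
\mathbf{Q}_{nn} = \mathbf{K}_{nm}\,\mathbf{K}_{mm}^{-1}\,\mathbf{K}_{mn},
\]

where \mathbf{K}_{nn} is the kernel matrix on the training inputs and \mathbf{K}_{nm} the cross-covariance with the inducing inputs. The Bayesian GP-LVM and deep GP papers extend this construction by additionally treating the inputs of each layer as random variables and integrating them out variationally, which yields the tractable free energy mentioned in the abstract.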
This talk is part of the Machine Learning Reading Group @ CUED series.