
CBL Alumni Talk: Accurate Gaussian Processes and how they can help Deep Learning


If you have a question about this talk, please contact Elre Oldewage.

In my opinion, model selection is the most appealing capability of Bayesian inference, and the one with the most to offer deep learning. However, performing Bayesian model selection requires accurate approximate inference. In the first part of the talk, I will discuss accurate inference in the fundamental building block of deep neural networks: a single layer. Specifically, I will focus on the Gaussian process (GP) representation of neural network layers, and present some recent work on inducing point and conjugate gradient approximations, while paying close attention to the question of what we should expect from methods that we consider “good” or even “exact”. In the second part of the talk, I will discuss how these techniques can be used for model selection in deep learning, with examples on learning invariances. I will close with some thoughts on how these ideas may develop in the future.
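As a rough illustration of one of the approximations mentioned above, the sketch below solves the GP regression system (K + σ²I)α = y with conjugate gradients instead of a Cholesky factorisation. The RBF kernel, toy data, hyperparameter values, and use of SciPy's `cg` are assumptions for illustration only and are not taken from the talk.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical toy regression data (not from the talk).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = variance * exp(-||a - b||^2 / (2 l^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

noise = 0.01
K = rbf_kernel(X, X)

# CG only needs matrix-vector products with (K + noise * I), which is what makes
# it attractive for large or structured GP covariance matrices.
Kop = LinearOperator(
    shape=(len(X), len(X)),
    matvec=lambda v: K @ v + noise * v,
    dtype=np.float64,
)
alpha, info = cg(Kop, y)  # info == 0 indicates convergence

# Predictive mean at a few test points: k(X*, X) @ alpha.
X_test = np.linspace(-3.0, 3.0, 5)[:, None]
mean = rbf_kernel(X_test, X) @ alpha
print(info, mean)
```

The same structure carries over to inducing-point approximations, where the dense kernel matrix above would be replaced by a low-rank approximation built from a smaller set of inducing inputs.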

This talk is part of the Machine Learning Reading Group @ CUED series.


 
