
Connections between kernels, GPs, and NNs


If you have a question about this talk, please contact Yingzhen Li.

First, we will follow Radford Neal’s PhD thesis and explain that a neural network with a single infinitely wide hidden layer and suitably scaled random weights is equivalent to a GP. We will then look at several examples of neural network kernels and discuss their properties.
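As a quick numerical illustration of the limit above, the sketch below (not part of the talk materials; the inputs and sample size are arbitrary choices) checks that the covariance of a single ReLU hidden unit with standard Gaussian weights matches a known closed form, the order-1 arc-cosine kernel of Cho and Saul. In the infinite-width limit this covariance becomes the GP kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fixed inputs in R^3 (arbitrary, for illustration only).
x = np.array([1.0, 0.5, -0.3])
y = np.array([0.2, -1.0, 0.7])

# Monte Carlo estimate of E_w[relu(w.x) relu(w.y)] with w ~ N(0, I):
# the covariance contributed by one random hidden unit, which the
# infinite-width limit turns into the GP kernel.
W = rng.standard_normal((500_000, 3))
mc = np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

# Closed form: the order-1 arc-cosine kernel (Cho & Saul, 2009),
# halved to match the N(0, I) weight distribution used above.
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))
exact = nx * ny * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

print(mc, exact)  # the two agree to Monte Carlo accuracy
```

The same experiment with an erf or tanh nonlinearity recovers the sigmoidal network kernels discussed in Neal's thesis.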

In the second part, we will talk about GPs and kernels in the context of regression. We will derive kernel ridge regression and show that it is equivalent to MAP inference in a GP regression model. Along the way we will give a brief introduction to the theory of Reproducing Kernel Hilbert Spaces. Time permitting, we will end by introducing Support Vector Regression: another kernel regression technique, but one that cannot be viewed as performing MAP inference in any GP model.
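The kernel-ridge/GP equivalence can be seen numerically in a few lines. The sketch below (my own toy data; the squared-exponential kernel, lengthscale, and noise level are arbitrary illustrative choices) computes the kernel ridge regression predictor with regulariser λ equal to the GP noise variance σ², and the GP posterior mean, and confirms they coincide:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression data (arbitrary, for illustration only).
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 50)[:, None]  # test inputs

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 ell^2)).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None] - 2 * A @ B.T
    return np.exp(-d2 / (2 * ell**2))

K = rbf(X, X)
lam = 0.1**2  # KRR regulariser, set equal to the GP noise variance sigma^2

# Kernel ridge regression: by the representer theorem the minimiser of
# sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2 is f(x*) = k(x*, X) alpha
# with alpha = (K + lam I)^{-1} y.
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
f_krr = rbf(Xs, X) @ alpha

# GP regression posterior mean, which for a Gaussian likelihood is also
# the MAP estimate: k(x*, X) (K + sigma^2 I)^{-1} y with sigma^2 = lam.
f_gp = rbf(Xs, X) @ np.linalg.solve(K + lam * np.eye(len(X)), y)

print(np.max(np.abs(f_krr - f_gp)))  # ~0: the two predictors coincide
```

The two formulas are term-by-term identical once λ = σ², which is exactly the equivalence the talk derives; the RKHS norm in the KRR objective plays the role of the GP log-prior.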

There is no required reading. However, if you want to read something, feel free to look at Chapter 2 of Radford Neal’s thesis.

This talk is part of the Machine Learning Reading Group @ CUED series.

