
Generalisation in neural networks


If you have a question about this talk, please contact Robert Pinsler.

It is not well understood why large neural networks, with more parameters than training examples, generalise from training data to test data. This talk explores the main hurdles and potential future avenues for improving our understanding of these models. In particular, we look at several approaches from Statistical Learning Theory for proving generalisation properties of neural networks. First, we examine a traditional approach that bounds the capacity of learning models (VC dimension), followed by a review of more recent approaches that use information theory to prove generalisation bounds.
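As a rough sketch of the two flavours of bound the abstract contrasts (these are standard textbook statements, not results taken from the talk itself):

```latex
% Classical capacity-based bound: with probability at least 1 - \delta
% over an i.i.d. sample of size n, every hypothesis h in a class H of
% VC dimension d satisfies (one standard form of the bound):
R(h) \;\le\; \hat{R}_n(h)
  + \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}

% Information-theoretic bound (Xu & Raginsky, 2017): if the loss is
% \sigma-subgaussian, the expected generalisation gap of an algorithm
% that outputs hypothesis W given training sample S is controlled by
% the mutual information between S and W:
\mathbb{E}\left[R(W) - \hat{R}_n(W)\right]
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}
```

The first bound depends only on the capacity of the hypothesis class, which is why it is vacuous for overparameterised networks; the second depends on how much the learned weights reveal about the particular training set, which can remain small even when the class is huge.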

This talk is part of the Machine Learning Reading Group @ CUED series.


