
Neural Tangent Kernel


If you have a question about this talk, please contact James Allingham.

Zoom link available upon request (it is sent out on our mailing list, eng-mlg-rcc [at]). Sign up to our mailing list for easier reminders.

Partially motivated by the observation that neural network performance reliably improves with size, the study of infinitely wide networks is a promising step towards a theory of deep learning. In this presentation, we cover the basics of the neural tangent kernel (an important theoretical tool for studying infinite-width networks) and discuss how it is relevant to finite-width neural networks.
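As background for the abstract above, here is a minimal sketch (not taken from the talk) of the *empirical* neural tangent kernel, Theta(x, x') = <grad_theta f(x), grad_theta f(x')>, computed for a hypothetical one-hidden-layer network in JAX. The network architecture, widths, and function names are illustrative assumptions, not anything specified by the speakers.

```python
# Hedged sketch: empirical NTK of a small illustrative network.
# The architecture and names below are assumptions for demonstration only.
import jax
import jax.numpy as jnp

def init_params(key, width=512, in_dim=2):
    # NTK-style scaling: weights divided by sqrt(fan-in).
    k1, k2 = jax.random.split(key)
    w1 = jax.random.normal(k1, (in_dim, width)) / jnp.sqrt(in_dim)
    w2 = jax.random.normal(k2, (width, 1)) / jnp.sqrt(width)
    return (w1, w2)

def f(params, x):
    # One-hidden-layer network with scalar output per example.
    w1, w2 = params
    return (jnp.tanh(x @ w1) @ w2).squeeze(-1)

def empirical_ntk(params, x1, x2):
    # Jacobians of the outputs with respect to all parameters.
    g1 = jax.jacobian(f)(params, x1)
    g2 = jax.jacobian(f)(params, x2)
    # Flatten each parameter's Jacobian to (batch, n_params_in_leaf) and stack.
    flat1 = jnp.concatenate(
        [g.reshape(x1.shape[0], -1) for g in jax.tree_util.tree_leaves(g1)], axis=1)
    flat2 = jnp.concatenate(
        [g.reshape(x2.shape[0], -1) for g in jax.tree_util.tree_leaves(g2)], axis=1)
    # Gram matrix of parameter-gradients: the empirical NTK.
    return flat1 @ flat2.T

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (4, 2))
K = empirical_ntk(params, x, x)
print(K.shape)  # (4, 4) Gram matrix, symmetric and positive semi-definite
```

At large width, this kernel concentrates around a deterministic limit and stays approximately constant during gradient-descent training, which is the phenomenon the infinite-width theory formalizes.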

Required reading: none

This talk is part of the Machine Learning Reading Group @ CUED series.




© 2006-2024, University of Cambridge.