
A mathematical theory of deep neural networks


If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

Postponed due to COVID-19

During the past decade, deep neural networks have led to spectacular successes in a wide range of applications, including image classification and annotation, handwritten digit recognition, speech recognition, and game intelligence. In this talk, we describe efforts to develop a mathematical theory that can explain these impressive practical achievements and possibly guide future deep learning architectures and algorithms. Specifically, we develop the fundamental limits of learning in deep neural networks by characterizing what is possible in principle. We then attempt to explain the inner workings of deep generative networks and of scattering networks. A brief survey of recent results on deep networks as solution engines for PDEs is followed by a discussion of interesting open problems and philosophical remarks on the role of mathematics in AI research.

This talk is part of the Information Engineering Distinguished Lecture Series.




© 2006-2024, University of Cambridge.