
Fundamental Limits of Learning with Feedforward and Recurrent Neural Networks


If you have a question about this talk, please contact Dr Ramji Venkataramanan.

Deep neural networks have led to breakthrough results in numerous practical machine learning tasks, such as image classification, image captioning, learning control policies for the board game Go, and, most recently, protein structure prediction. In this lecture, we will attempt to understand some of the structural and mathematical reasons behind these successes. Specifically, we study what is possible in principle if no constraints are imposed on the learning algorithm or on the amount and quality of training data. The guiding theme will be a relation between the complexity of the objects to be learned and the complexity of the networks approximating them, with the central result stating that universal Kolmogorov-optimality is achieved by feedforward neural networks in function learning and by recurrent neural networks in dynamical system learning.
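As a loose illustration of the function-learning setting mentioned above (this sketch is not from the talk and says nothing about Kolmogorov-optimality), the snippet below fits a one-hidden-layer feedforward network to f(x) = sin(x). The hidden weights are drawn at random and only the linear output layer is fit by least squares; all sizes and scales are arbitrary choices for the example.

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer feedforward network
# approximating f(x) = sin(x) on [0, pi]. Hidden-layer weights are
# fixed at random; only the output weights are fit, by least squares.
rng = np.random.default_rng(0)
n_hidden = 64

x = np.linspace(0.0, np.pi, 200)[:, None]      # training inputs
y = np.sin(x).ravel()                          # target function values

W = rng.normal(scale=2.0, size=(1, n_hidden))  # random hidden weights
b = rng.normal(scale=2.0, size=n_hidden)       # random hidden biases
H = np.tanh(x @ W + b)                         # hidden activations

# Fit the output layer by least squares.
v, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ v
print("max error:", np.max(np.abs(y_hat - y)))
```

Even this crude scheme approximates a smooth target well on the training grid; the talk's question is the far stronger one of how network complexity must scale with the complexity of the function class being learned.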

This talk is part of the Information Engineering Distinguished Lecture Series.

