Fundamental Limits of Learning with Feedforward and Recurrent Neural Networks
If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

Deep neural networks have led to breakthrough results in numerous practical machine learning tasks, such as image classification, image captioning, control-policy learning to play the board game Go, and most recently the prediction of protein structures. In this lecture, we will attempt to understand some of the structural and mathematical reasons driving these successes. Specifically, we study what is possible in principle if no constraints are imposed on the learning algorithm or on the amount and quality of training data. The guiding theme will be a relation between the complexity of the objects to be learned and that of the networks approximating them, with the central result stating that universal Kolmogorov-optimality is achieved by feedforward neural networks in function learning and by recurrent neural networks in dynamical system learning.

This talk is part of the Information Engineering Distinguished Lecture Series.
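For orientation, Kolmogorov-optimality in this line of work is usually phrased in terms of description complexity. The following is a hedged sketch of the flavour of statement involved, not the speaker's exact formulation; the symbols $\mathcal{C}$, $\epsilon$, $\mathrm{bits}(\Phi)$, and $H(\epsilon, \mathcal{C})$ are illustrative assumptions rather than notation from the talk:

% Sketch only: networks are assumed to be quantized, so that bits(Phi)
% counts the bits needed to store the architecture and weights of Phi.
\[
  \min_{\Phi \,:\; \sup_{f \in \mathcal{C}} \| f - \Phi \|_{\infty} \le \epsilon}
  \mathrm{bits}(\Phi)
  \;\asymp\;
  H(\epsilon, \mathcal{C}),
  \qquad \epsilon \to 0,
\]

where $H(\epsilon, \mathcal{C})$ denotes the metric entropy of the function class $\mathcal{C}$, i.e. the minimum number of bits any encoder-decoder pair needs to represent every $f \in \mathcal{C}$ to within error $\epsilon$. Read this way, the claim is that neural networks are essentially as succinct as any representation can be, and the qualifier "universal" refers, roughly, to this optimality holding simultaneously across a broad family of function classes rather than for one class at a time.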