A mathematical theory of deep neural networks
If you have a question about this talk, please contact Prof. Ramji Venkataramanan.

Postponed due to COVID-19.

Abstract: During the past decade, deep neural networks have led to spectacular successes in a wide range of applications such as image classification and annotation, handwritten digit recognition, speech recognition, and game intelligence. In this talk, we describe efforts to develop a mathematical theory that can explain these impressive practical achievements and possibly guide future deep learning architectures and algorithms. Specifically, we develop the fundamental limits of learning in deep neural networks by characterizing what is possible in principle. We then attempt to explain the inner workings of deep generative networks and of scattering networks. A brief survey of recent results on deep networks as solution engines for PDEs is followed by considerations of interesting open problems and philosophical remarks on the role of mathematics in AI research.

This talk is part of the Information Engineering Distinguished Lecture Series.