Progress Towards Understanding Generalization in Deep Learning
If you have a question about this talk, please contact . Passcode: 381314

Abstract: There is, as yet, no satisfying theory explaining why common learning algorithms, such as those based on stochastic gradient descent, generalize in practice on overparameterized neural networks. I will discuss various approaches that have been taken to explaining generalization in deep learning, and identify some of the barriers these approaches have faced. I will then discuss my recent work on information-theoretic and PAC-Bayesian approaches to understanding generalization in noisy variants of SGD. In particular, I will highlight how we can take advantage of conditioning to obtain sharper data- and distribution-dependent generalization measures. I will also briefly touch upon my work on properties of the optimization landscape and some of the challenges we face in incorporating these insights into the theory of generalization.

This talk is part of the ML@CL Seminar Series.
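As background for the PAC-Bayesian approach mentioned in the abstract, a standard McAllester-style PAC-Bayes bound is sketched below. The notation here ($P$, $Q$, $L$, $\hat{L}_S$, $n$, $\delta$) is chosen for illustration only; it is a generic textbook form, not the sharper conditional, data- and distribution-dependent bounds referred to in the abstract. For any prior $P$ over hypotheses fixed before seeing the data and any $\delta \in (0,1)$, with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$, simultaneously for all posteriors $Q$:

\[
  \mathbb{E}_{h \sim Q}\!\left[L(h)\right] \;\le\; \mathbb{E}_{h \sim Q}\!\left[\hat{L}_S(h)\right]
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{n}/\delta\right)}{2n}},
\]

where $L$ denotes the population risk and $\hat{L}_S$ the empirical risk on $S$, and $\mathrm{KL}(Q\,\|\,P)$ is the Kullback-Leibler divergence between posterior and prior.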