Information-Theoretic Generalization Bounds for Stochastic Gradient Descent
We study the generalization properties of the popular stochastic gradient descent (SGD) method for optimizing general non-convex loss functions. Our main contribution is upper bounds on the generalization error that depend on local statistics of the stochastic gradients evaluated along the path of iterates computed by SGD. The key quantities our bounds depend on are the variance of the gradients (with respect to the data distribution), the local smoothness of the objective function along the SGD path, and the sensitivity of the loss function to perturbations of the final output. Our key technical tool is combining the information-theoretic generalization bounds previously used for analyzing randomized variants of SGD with a perturbation analysis of the iterates.

This talk is part of the ML@CL Seminar Series.
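For background, the kind of information-theoretic bound this line of work starts from is the mutual-information bound of Xu and Raginsky (2017); the talk's contribution is to make such bounds depend on local path statistics of SGD, but the basic template, stated here only as context and not as the talk's own result, is:

\[
\bigl| \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \bigr|
\;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]

where \(S = (Z_1, \dots, Z_n)\) is an i.i.d. sample from \(\mu\), \(W\) is the algorithm's output, \(L_\mu\) and \(L_S\) are the population and empirical risks, the loss \(\ell(w, Z)\) is assumed \(\sigma\)-sub-Gaussian for every \(w\), and \(I(W; S)\) is the mutual information between output and sample.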
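To make the path statistics concrete, here is a minimal NumPy sketch, with illustrative names and not the speakers' code, of running plain SGD while recording the empirical gradient variance at each iterate, one of the quantities the abstract says the bounds depend on:

import numpy as np

def sgd_with_path_statistics(grad_fn, data, w0, lr=0.1, steps=100, rng=None):
    """Run plain SGD, recording the per-iterate gradient variance across
    the sample. `grad_fn(w, z)` returns the loss gradient at parameters w
    on example z; all names here are hypothetical, for illustration only."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    variances = []
    for _ in range(steps):
        # Per-example gradients at the current point on the SGD path.
        grads = np.stack([grad_fn(w, z) for z in data])
        # Empirical gradient variance: mean squared distance of
        # per-example gradients from their mean.
        mean_grad = grads.mean(axis=0)
        variances.append(np.mean(np.sum((grads - mean_grad) ** 2, axis=1)))
        # Standard SGD update with a uniformly sampled example.
        z = data[rng.integers(len(data))]
        w = w - lr * grad_fn(w, z)
    return w, variances

# Toy usage: least-squares loss l(w, (x, y)) = 0.5 * (w @ x - y)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=64)
data = list(zip(X, y))
grad = lambda w, z: (w @ z[0] - z[1]) * z[0]
w_final, var_path = sgd_with_path_statistics(grad, data, w0=np.zeros(3), rng=1)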