Accelerating Variance-Reduced Stochastic Gradient Methods
If you have a question about this talk, please contact Matthew Colbrook.

Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. Only a few variance-reduced methods, however, have yet been shown to directly benefit from Nesterov's acceleration techniques and match the convergence rates of accelerated gradient methods. In this work, we develop a universal acceleration framework that allows all popular variance-reduced methods to achieve accelerated convergence rates. The constants appearing in these rates, including their dependence on the dimension n, scale with the mean-squared error and bias of the gradient estimator. In a series of numerical experiments, we demonstrate that versions of the popular gradient estimators SAGA, SVRG, SARAH, and SARGE using our framework significantly outperform their non-accelerated counterparts.

This talk is part of the Cambridge-Imperial Computational PhD Seminar series.
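As a rough illustration of the kind of variance-reduced gradient estimator the abstract refers to, the sketch below implements a plain (non-accelerated) SVRG loop in Python/NumPy. The function names, parameters, and the least-squares test problem are illustrative assumptions for this page, not the speaker's framework or code.

import numpy as np

def svrg(grad_i, x0, n, step, epochs=20, inner=None, rng=None):
    """Minimal SVRG sketch for minimizing (1/n) * sum_i f_i(x).

    grad_i(x, i) returns the gradient of the i-th component f_i at x.
    This is an illustrative, non-accelerated baseline.
    """
    rng = np.random.default_rng() if rng is None else rng
    inner = n if inner is None else inner
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot (anchor) point.
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased, and its variance
            # shrinks as x approaches the snapshot.
            g = grad_i(x, i) - grad_i(snapshot, i) + full_grad
            x = x - step * g
    return x

Example usage on a small least-squares problem (hypothetical data, just to show the calling convention):

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]   # gradient of 0.5*(a_i^T x - b_i)^2
x_hat = svrg(grad_i, np.zeros(5), n=50, step=0.02)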