Accelerating Variance-Reduced Stochastic Gradient Methods

If you have a question about this talk, please contact Matthew Colbrook.

Variance reduction is a crucial tool for improving the slow convergence of stochastic gradient descent. So far, however, only a few variance-reduced methods have been shown to benefit directly from Nesterov's acceleration techniques and match the convergence rates of accelerated gradient methods. In this work, we develop a universal acceleration framework that allows all popular variance-reduced methods to achieve accelerated convergence rates. The constants appearing in these rates, including their dependence on the dimension n, scale with the mean-squared error and bias of the gradient estimator. In a series of numerical experiments, we demonstrate that versions of the popular gradient estimators SAGA, SVRG, SARAH, and SARGE using our framework significantly outperform non-accelerated versions.
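
As background for the abstract above, the sketch below shows a plain (non-accelerated) SVRG-style variance-reduced gradient estimator applied to a least-squares problem, written in Python with NumPy. This is only an illustrative sketch of how such an estimator is commonly implemented, not the accelerated framework presented in the talk; the function name, step size, and synthetic test problem are assumptions made for the example.

    import numpy as np

    def svrg(A, b, step=0.02, epochs=20, seed=0):
        """Plain SVRG for the least-squares objective f(w) = (1/2n) * ||A w - b||^2."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        w = np.zeros(d)
        for _ in range(epochs):
            w_snap = w.copy()
            # Exact gradient at the snapshot point, recomputed once per epoch.
            full_grad = A.T @ (A @ w_snap - b) / n
            for _ in range(n):
                i = rng.integers(n)
                a_i = A[i]
                # Variance-reduced estimator: unbiased, and its variance shrinks
                # as the iterate w approaches the snapshot w_snap.
                g = a_i * (a_i @ w - b[i]) - a_i * (a_i @ w_snap - b[i]) + full_grad
                w = w - step * g
        return w

    # Small synthetic example: recover w_true from noiseless measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    w_true = rng.standard_normal(10)
    b = A @ w_true
    print(np.linalg.norm(svrg(A, b) - w_true))

The key property illustrated here is that the estimator remains unbiased while its variance vanishes as the iterates approach the snapshot point, which is what lets variance-reduced methods take larger steps than plain stochastic gradient descent and, as the talk discusses, makes them candidates for acceleration.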

This talk is part of the Cambridge-Imperial Computational PhD Seminar series.
