Stochastic gradient with least-squares control variates

RCL - Representing, calibrating & leveraging prediction uncertainty from statistics to machine learning

The stochastic gradient (SG) method is a widely used approach for solving stochastic optimization problems, but its convergence is typically slow. Existing variance reduction techniques, such as SAGA, improve convergence by leveraging stored gradient information; however, they are restricted to settings where the objective functional is a finite sum, and their performance degrades when the number of terms in the sum is large. In this work, we propose a novel approach that is best suited when the objective is given by an expectation over random variables with a continuous probability distribution. Our method constructs a control variate by fitting a linear model to past gradient evaluations using weighted discrete least-squares, effectively reducing variance while preserving computational efficiency. We establish theoretical sublinear convergence guarantees and demonstrate the method's effectiveness through numerical experiments on random PDE-constrained optimization problems.
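To make the general construction concrete, the Python sketch below illustrates one standard way a linear control variate can be fitted to past gradient samples and used to correct an SG step. It is only a generic illustration under simplifying assumptions, not the speaker's actual algorithm: the toy quadratic objective, the ordinary (unweighted) least-squares fit in place of the weighted discrete least-squares mentioned in the abstract, the known mean of the random variable, and all parameter values are assumptions made for readability.

import numpy as np

# Sketch only: minimize F(x) = E_omega[ 0.5*||x - omega||^2 ] with omega ~ N(mu, I).
# The control variate c(omega) = A @ omega + b is fitted to stored gradient samples;
# its expectation A @ mu + b is computable because E[omega] = mu is assumed known.

rng = np.random.default_rng(0)
d = 5                       # dimension of the decision variable x and of omega
mu = np.ones(d)             # assumed-known mean of omega
eta = 0.1                   # constant step size (an assumption for this sketch)

def grad_sample(x, omega):
    """Gradient of f(x, omega) = 0.5*||x - omega||^2 for one realisation of omega."""
    return x - omega

x = np.zeros(d)
history_omega, history_grad = [], []

for k in range(200):
    omega = mu + rng.standard_normal(d)     # fresh sample omega_k
    g_raw = grad_sample(x, omega)           # plain stochastic gradient
    g = g_raw

    if len(history_omega) >= 2 * d:
        # Fit the linear model  c(omega) = A @ omega + b  to stored gradients by
        # ordinary least squares (a weighted fit would down-weight stale samples).
        Omega = np.column_stack([np.array(history_omega), np.ones(len(history_omega))])
        coeffs, *_ = np.linalg.lstsq(Omega, np.array(history_grad), rcond=None)
        A, b = coeffs[:-1].T, coeffs[-1]

        c_omega = A @ omega + b             # control variate at the current sample
        mean_c = A @ mu + b                 # its exact expectation
        g = g_raw - c_omega + mean_c        # variance-reduced gradient estimate

    history_omega.append(omega)             # store the sample and its raw gradient
    history_grad.append(g_raw)              # (evaluated at the current iterate)
    x = x - eta * g                         # standard SG update with corrected gradient

Note that the correction leaves the gradient estimate unbiased: the fitted model depends only on past samples, and its exact mean is added back, so variance is reduced without perturbing the expected update direction.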

This talk is part of the Isaac Newton Institute Seminar Series.
