
Sharp convergence guarantees for iterative (non)-convex empirical risk minimization with random data


  • Speaker: Dr Kabir Verchand, University of Cambridge
  • Time: Wednesday 11 October 2023, 14:00-15:00
  • Venue: MR5, CMS Pavilion A

If you have a question about this talk, please contact Dr Varun Jog.

Fitting a model to data typically involves applying an iterative algorithm to minimize an empirical risk. Given a particular empirical risk minimization problem, however, algorithm selection is often performed either via expensive trial and error or by appeal to potentially conservative worst-case efficiency estimates, and it is unclear how to compare and contrast algorithms in a principled and meaningful manner. In this talk, we present one potential avenue for obtaining fine-grained, principled comparisons between iterative algorithms. We provide a framework, based on Gaussian comparison inequalities, for characterizing the trajectory of an iterative algorithm run with sample-splitting on a set of nonconvex model-fitting problems with Gaussian data. We use this framework to demonstrate concrete separations in the convergence behavior of several algorithms, and to reveal some nonstandard convergence phenomena.
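To illustrate the setting the abstract describes, the sketch below runs gradient descent with sample-splitting on a nonconvex model-fitting problem with Gaussian data. The specific model (noiseless phase retrieval), step size, dimensions, and warm start are illustrative assumptions, not details from the talk; the point is only the sample-splitting pattern, in which each iteration draws a fresh Gaussian batch so the current iterate is independent of the data it is updated on, and the resulting per-iteration error trajectory.

```python
import numpy as np

# Illustrative sketch (assumptions, not the speaker's setup): gradient descent
# with sample-splitting on a nonconvex phase-retrieval empirical risk,
#   f(theta) = (1/2n) * sum_i ((x_i' theta)^2 - y_i)^2,
# with standard Gaussian covariates x_i and noiseless labels y_i = (x_i' theta*)^2.

rng = np.random.default_rng(0)
d, n_per_iter, T = 20, 2000, 30
theta_star = np.ones(d) / np.sqrt(d)          # ground-truth parameter, norm 1

def grad(theta, X, y):
    # Gradient of f: (2/n) * sum_i ((x_i' theta)^2 - y_i) (x_i' theta) x_i
    r = (X @ theta) ** 2 - y
    return 2.0 / len(y) * (X.T @ (r * (X @ theta)))

theta = theta_star + 0.05 * rng.standard_normal(d)   # warm start near theta*
errors = []
for t in range(T):
    # Sample-splitting: a fresh Gaussian batch at every iteration, so the
    # iterate entering the step is independent of the data used in the step.
    X = rng.standard_normal((n_per_iter, d))
    y = (X @ theta_star) ** 2
    theta = theta - 0.1 * grad(theta, X, y)
    # Phase retrieval recovers theta* only up to sign, so track the min.
    errors.append(min(np.linalg.norm(theta - theta_star),
                      np.linalg.norm(theta + theta_star)))

print(f"error at t=0: {errors[0]:.4f}, error at t={T - 1}: {errors[-1]:.4f}")
```

Tracking `errors` over iterations gives exactly the kind of algorithm trajectory the talk's framework aims to characterize deterministically; comparing such trajectories across algorithms (e.g. gradient descent versus alternating-minimization-style updates) is the kind of fine-grained comparison the abstract refers to.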

This talk is part of the Information Theory Seminar series.


© 2006-2024 Talks.cam, University of Cambridge.