University of Cambridge > Talks.cam > Information Theory Seminar > Sharp convergence guarantees for iterative (non)-convex empirical risk minimization with random data
Sharp convergence guarantees for iterative (non)convex empirical risk minimization with random data
If you have a question about this talk, please contact Dr Varun Jog.

Fitting a model to data typically involves applying an iterative algorithm to minimize an empirical risk. However, for a given empirical risk minimization problem, algorithm selection is often performed either by expensive trial-and-error or by appealing to potentially conservative worst-case efficiency estimates, and it is unclear how to compare and contrast algorithms in a principled and meaningful manner. In this talk, we present one potential avenue for obtaining fine-grained, principled comparisons between iterative algorithms. We provide a framework, based on Gaussian comparison inequalities, to characterize the trajectory of an iterative algorithm run with sample-splitting on a set of nonconvex model-fitting problems with Gaussian data. We use this framework to demonstrate concrete separations in the convergence behavior of several algorithms, as well as to reveal some nonstandard convergence phenomena.

This talk is part of the Information Theory Seminar series.
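To make the setting concrete, here is a minimal illustrative sketch of an iterative algorithm run with sample-splitting on a nonconvex model-fitting problem with Gaussian data: gradient descent on a noiseless phase-retrieval-style empirical risk, where each iteration draws a fresh Gaussian batch. This is not the speaker's framework or analysis; the function name, loss, and step-size choices are assumptions made purely for illustration.

```python
import numpy as np

def sample_split_gd(theta0, theta_star, n_per_step, steps, lr, rng):
    """Gradient descent on the nonconvex phase-retrieval risk
    f(theta) = (1/n) * sum_i ((x_i @ theta)^2 - y_i)^2,
    with sample-splitting: every iteration uses a fresh Gaussian batch.
    Returns the final iterate and the per-step distance to theta_star.
    (Illustrative sketch only, not the framework from the talk.)"""
    d = theta_star.size
    theta = theta0.copy()
    errors = [np.linalg.norm(theta - theta_star)]
    for _ in range(steps):
        X = rng.standard_normal((n_per_step, d))   # fresh Gaussian data
        y = (X @ theta_star) ** 2                  # noiseless observations
        resid = (X @ theta) ** 2 - y
        # Exact gradient of the empirical risk above
        grad = (4.0 / n_per_step) * (X.T @ (resid * (X @ theta)))
        theta -= lr * grad
        errors.append(np.linalg.norm(theta - theta_star))
    return theta, errors

# Usage: nonconvex losses like this typically need a warm start,
# so we initialize near the ground truth (an assumption, for illustration).
rng = np.random.default_rng(0)
d = 5
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)
theta0 = theta_star + 0.1 * rng.standard_normal(d)
theta_hat, errors = sample_split_gd(
    theta0, theta_star, n_per_step=200, steps=20, lr=0.02, rng=rng
)
```

Because each step sees independent data, the random trajectory `errors` is exactly the kind of object a Gaussian-comparison analysis would aim to characterize deterministically.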