Stochastic variants of classical optimization methods, with complexity guarantees
If you have a question about this talk, please contact J.W.Stevens.

Optimization is a key component of machine learning applications: it underpins the training of (nonconvex, neural-network) models and the tuning of their parameters. Classical optimization methods are challenged by the scale of machine learning applications, by the lack (or cost) of full derivatives, and by the stochastic nature of the problem. On the other hand, the simple approaches favoured by the machine learning community leave room for improvement. Here we try to merge the two perspectives and adapt the strengths of classical optimization techniques to meet the challenges of data science applications: from deterministic to stochastic problems, and from typical to large scale. We propose a general algorithmic framework and complexity analysis that allow the use of inexact, stochastic, and even possibly biased problem information in classical methods for nonconvex optimization. This work is joint with Katya Scheinberg (Cornell), Jose Blanchet (Columbia) and Matt Menickelly (Argonne).
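The abstract describes the framework only at a high level. As a hedged illustration of the general idea, namely driving a classical method with inexact, stochastic oracles, below is a minimal Python sketch of gradient descent with Armijo backtracking in which both function and gradient values are noisy estimates. The oracle names (f_est, g_est) and all parameters are assumptions made for this example; this is not the speakers' algorithm, whose complexity guarantees rest on a careful probabilistic analysis of when such noisy tests can be trusted.

# Illustrative sketch only: a classical backtracking line search driven by
# stochastic (possibly biased) function and gradient estimates. All names
# and constants are assumptions chosen for this example.
import numpy as np

def stochastic_line_search(f_est, g_est, x0, max_iters=200,
                           alpha0=1.0, c1=1e-4, backtrack=0.5, tol=1e-6):
    """Armijo backtracking driven by stochastic oracles.

    f_est(x): noisy estimate of f(x)
    g_est(x): noisy estimate of grad f(x)
    """
    x = x0.copy()
    alpha = alpha0
    for _ in range(max_iters):
        g = g_est(x)
        if np.linalg.norm(g) < tol:
            break
        # With noisy oracles, the Armijo sufficient-decrease test only
        # holds probabilistically; each call re-samples the estimate.
        fx = f_est(x)
        while f_est(x - alpha * g) > fx - c1 * alpha * np.dot(g, g):
            alpha *= backtrack
            if alpha < 1e-12:
                break
        x = x - alpha * g
        alpha = min(alpha / backtrack, alpha0)  # let the step grow back
    return x

# Example: f(x) = ||x||^2 / 2 observed through Gaussian oracle noise.
rng = np.random.default_rng(0)
f_est = lambda x: 0.5 * np.dot(x, x) + 1e-3 * rng.standard_normal()
g_est = lambda x: x + 1e-3 * rng.standard_normal(x.shape)
x_star = stochastic_line_search(f_est, g_est, np.ones(10))
print(np.linalg.norm(x_star))  # small: converges to the oracle noise level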
This talk is part of the CCIMI Seminars series.