
Optimal Newton-type methods for nonconvex smooth optimization


If you have a question about this talk, please contact Carola-Bibiane Schoenlieb.

This talk addresses global rates of convergence and the worst-case evaluation complexity of methods for nonconvex optimization problems. We show that, under standard assumptions, the classical steepest-descent and Newton's methods for unconstrained nonconvex optimization may both require a number of iterations and function evaluations arbitrarily close to the global worst-case complexity bound of steepest descent. This implies that the latter upper bound is essentially tight for steepest descent and that Newton's method may be as slow as the steepest-descent method in the worst case. We then consider the cubic regularization of Newton's method (Griewank (1981), Nesterov & Polyak (2006)) and extend it to large-scale problems while preserving the order of its improved worst-case complexity (compared with that of steepest descent); this improved worst-case bound is also shown to be tight. We further show that the cubic regularization approach is, in fact, optimal from a worst-case complexity point of view amongst a wide class of second-order methods for nonconvex optimization. The worst-case problem-evaluation complexity of constrained optimization will also be discussed. This is joint work with Nick Gould (Rutherford Appleton Laboratory, UK) and Philippe Toint (University of Namur, Belgium).
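To make the cubic regularization idea concrete, the minimal sketch below computes one cubic-regularized Newton step in the spirit of Griewank (1981) and Nesterov & Polyak (2006): at the current point, the step s minimizes the cubic model g^T s + (1/2) s^T H s + (sigma/3) ||s||^3. The function name cubic_newton_step, the fixed weight sigma, and the small test function are illustrative choices for this page, not part of the talk; the practical (and large-scale) variants discussed in the talk choose sigma adaptively and solve the subproblem only approximately.

import numpy as np
from scipy.optimize import brentq

def cubic_newton_step(grad, hess, sigma):
    """Minimise the cubic model g^T s + 0.5 s^T H s + (sigma/3)*||s||^3 over s.

    The global minimiser satisfies (H + lam*I) s = -g with lam = sigma*||s||
    and H + lam*I positive semidefinite, so lam can be found by a 1-D root
    find on the multiplier.  (The degenerate 'hard case' is ignored here.)
    """
    n = len(grad)
    lam_min = np.linalg.eigvalsh(hess).min()
    lo = max(0.0, -lam_min) + 1e-8          # smallest admissible multiplier

    def residual(lam):
        s = np.linalg.solve(hess + lam * np.eye(n), -grad)
        return sigma * np.linalg.norm(s) - lam   # decreasing in lam, unique root

    hi = lo + 1.0
    while residual(hi) > 0.0:                # expand until the root is bracketed
        hi *= 2.0
    lam = brentq(residual, lo, hi)
    return np.linalg.solve(hess + lam * np.eye(n), -grad)

if __name__ == "__main__":
    # One step on the simple nonconvex function f(x) = 0.5*x0^2 - cos(x1),
    # whose Hessian is indefinite at the chosen point (illustrative only).
    x = np.array([1.0, 2.0])
    g = np.array([x[0], np.sin(x[1])])
    H = np.diag([1.0, np.cos(x[1])])
    print("cubic-regularised Newton step:", cubic_newton_step(g, H, sigma=1.0))

Because the regularization term penalizes long steps, this model has a well-defined global minimizer even when the Hessian is indefinite, which is what underlies the improved worst-case complexity discussed in the abstract.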

This talk is part of the Applied and Computational Analysis series.
