Gradient methods for huge-scale optimization problems

If you have a question about this talk, please contact Tim Hughes.

This talk has been canceled/deleted

We consider a new class of huge-scale problems: problems with sparse gradients. The most important functions of this type are piecewise linear. For optimization problems whose linear operators have uniform sparsity, we propose a very efficient implementation of the iterations, whose total cost depends only logarithmically on the dimension. The technique is based on recursively updating the results of matrix/vector products and the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that this updating technique can be efficiently coupled with the simplest gradient methods. Similar results hold for a new non-smooth random variant of the coordinate descent scheme. We also present promising results from preliminary computational experiments and discuss extensions of the technique.
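The idea of recursively updating matrix/vector products can be illustrated with a minimal sketch (not the speaker's code; the class name and data layout are assumptions). When a coordinate-descent or sparse-gradient step changes a single coordinate x_j, only the entries of y = Ax touched by column j need refreshing, and a binary max-tree over y keeps max_i y_i (the value of a max-type function) current in logarithmic time per touched entry:

```python
class IncrementalMax:
    """Maintain y = A x and max_i y_i under single-coordinate updates of x.

    Columns of A are stored sparsely as lists of (row, value) pairs, so an
    update of x_j costs O(p log m), where p is the number of nonzeros in
    column j, rather than the O(m n) cost of recomputing A x from scratch.
    """

    def __init__(self, columns, m):
        # columns[j] = list of (row_index, value) pairs of column j of A
        self.columns = columns
        self.m = m
        size = 1
        while size < m:
            size *= 2
        self.size = size
        # binary max-tree: leaves size..size+m-1 hold y, internal nodes hold maxima
        self.tree = [float("-inf")] * (2 * size)
        self.y = [0.0] * m                      # x starts at 0, so y = 0
        for i in range(m):
            self._set(i, 0.0)

    def _set(self, i, value):
        self.y[i] = value
        k = self.size + i
        self.tree[k] = value
        k //= 2
        while k >= 1:                           # propagate the max upward
            self.tree[k] = max(self.tree[2 * k], self.tree[2 * k + 1])
            k //= 2

    def update_coordinate(self, j, delta):
        """Apply x_j <- x_j + delta; refresh only the affected entries of y."""
        for i, a_ij in self.columns[j]:
            self._set(i, self.y[i] + a_ij * delta)

    def max_value(self):
        return self.tree[1]
```

For a matrix with few nonzeros per column, each iteration of a simple gradient or coordinate descent method then pays only for the entries it actually disturbs, which is the source of the logarithmic dependence on dimension mentioned above.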

This talk is part of the CUED Control Group Seminars series.


© 2006-2020 Talks.cam, University of Cambridge.