Gradient methods for huge-scale optimization problems
If you have a question about this talk, please contact Rachel Fogg.

We consider a new class of huge-scale problems: problems with sparse gradients. The most important functions of this type are piecewise linear. For optimization problems whose linear operators are uniformly sparse, we suggest a very efficient implementation of the iterations, whose total cost depends logarithmically on the dimension. The technique is based on recursively updating the results of matrix-vector products and the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that the updating technique can be efficiently coupled with the simplest gradient methods. Similar results can be obtained for a new non-smooth random variant of a coordinate descent scheme. We also present promising results from preliminary computational experiments and discuss extensions of the technique.
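As a concrete illustration of the updating idea in the abstract, here is a minimal Python sketch (not from the talk; the class names, the column-dictionary matrix representation, and the segment-tree choice are illustrative assumptions). It maintains y = Ax and the max-type objective f(x) = max_i (Ax)_i under single-coordinate changes of x: changing x_j touches only the nonzeros of column j of A, and each touched entry of y is pushed into a binary max-tree in O(log m) time, so one step costs O(p log m) for a column with p nonzeros instead of a full recomputation.

```python
import math

class MaxSegmentTree:
    """Binary tree over m values supporting a point update and a
    global max query, each in O(log m) time."""

    def __init__(self, values):
        self.size = 1
        while self.size < len(values):
            self.size *= 2
        self.tree = [-math.inf] * (2 * self.size)
        for i, v in enumerate(values):
            self.tree[self.size + i] = v
        for i in range(self.size - 1, 0, -1):
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, v):
        i += self.size
        self.tree[i] = v
        i //= 2
        while i >= 1:  # recompute partial maxima along the root path
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def max(self):
        return self.tree[1]


class SparseMaxOracle:
    """Maintains y = A x and f(x) = max_i y_i under single-coordinate
    changes of x.  Each column of A is a {row: value} dict, so changing
    x_j costs O(nnz(column j) * log m), not a full matrix-vector product."""

    def __init__(self, columns, m, x):
        self.columns = columns
        self.x = list(x)
        self.y = [0.0] * m
        for j, col in enumerate(columns):
            for i, a_ij in col.items():
                self.y[i] += a_ij * x[j]
        self.tree = MaxSegmentTree(self.y)

    def update_coordinate(self, j, new_xj):
        delta = new_xj - self.x[j]
        self.x[j] = new_xj
        for i, a_ij in self.columns[j].items():  # only nonzeros of column j
            self.y[i] += a_ij * delta
            self.tree.update(i, self.y[i])

    def f(self):
        return self.tree.max()


# f(x) = max(2*x0 + x1, x1 - 3*x2) on R^3, i.e. A has m = 2 rows.
cols = [{0: 2.0}, {0: 1.0, 1: 1.0}, {1: -3.0}]
oracle = SparseMaxOracle(cols, m=2, x=[1.0, 0.0, 0.0])
print(oracle.f())                  # 2.0
oracle.update_coordinate(2, 1.0)   # touches only column 2's nonzeros
print(oracle.f())                  # 2.0  (y is now [2.0, -3.0])
```

A coordinate descent or subgradient step that changes only a few coordinates per iteration can therefore be executed at a cost essentially independent of the number of variables, which is the effect the abstract refers to.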
Biography: Yurii Nesterov is a professor at the Catholic University of Louvain, Belgium, where he is a member of the Center for Operations Research and Econometrics (CORE). He is the author of four monographs and more than 80 refereed papers in leading optimization journals. He was awarded the Dantzig Prize 2000 by SIAM and the Mathematical Programming Society (for research having a major impact on the field of mathematical programming), the John von Neumann Theory Prize 2009 by INFORMS, the Charles Broyden Prize 2010 (for the best paper in the journal Optimization Methods and Software), and the Honorable Francqui Chair (University of Liège, 2011-2012). The main direction of his research is the development of efficient numerical methods for convex and nonconvex optimization problems, supported by a global complexity analysis. His most important results concern general interior-point methods (the theory of self-concordant functions), fast gradient methods (the smoothing technique), and the global complexity analysis of second-order schemes (cubic regularization of Newton's method).

This talk is part of the CUED Control Group Seminars series.