Stochastic Algorithms for Nonconvex Optimization
If you have a question about this talk, please contact Fulvio Forni.
Nowadays it is quite common to solve optimization problems in 10^9 or more variables. At this scale, it is not practical to use the "true" gradient of the objective function. Instead, a variety of methods rely on approximate gradients that are random; in other words, stochastic gradients. Often, only some components of the argument are updated at each iteration, to reduce storage requirements. As a result, modern optimization algorithms produce stochastic processes rather than sequences of vectors in some Euclidean space. A further complication is that the objective function is not convex.
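As a point of reference for the ideas in the abstract, here is a minimal illustrative sketch (not the speaker's algorithm) of stochastic gradient descent that updates only a random block of coordinates per iteration, applied to a simple nonconvex test function. The objective, noise level, step size, and block size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonconvex_objective(x):
    # Simple nonconvex test function: quadratic plus a cosine term.
    return 0.5 * np.dot(x, x) + np.sum(np.cos(3.0 * x))

def stochastic_partial_gradient(x, block):
    # Exact gradient restricted to the chosen coordinates, perturbed by
    # noise to mimic a stochastic (e.g. mini-batch) gradient estimate.
    g = x[block] - 3.0 * np.sin(3.0 * x[block])
    return g + 0.1 * rng.standard_normal(block.size)

def block_sgd(dim=1000, block_size=50, steps=5000, step_size=0.01):
    # Stochastic gradient descent that touches only `block_size` random
    # coordinates per iteration, so each step stores and updates only a
    # small part of the iterate.
    x = rng.standard_normal(dim)
    for _ in range(steps):
        block = rng.choice(dim, size=block_size, replace=False)
        x[block] -= step_size * stochastic_partial_gradient(x, block)
    return x

x_final = block_sgd()
print("final objective value:", nonconvex_objective(x_final))
```

Because both the gradient estimate and the choice of coordinates are random, the iterates form a stochastic process, which is the viewpoint the abstract emphasizes.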
The seminar will be held in LR3A, Department of Engineering, and online (Zoom): https://newnham.zoom.us/j/92544958528?pwd=YS9PcGRnbXBOcStBdStNb3E0SHN1UT09
This talk is part of the CUED Control Group Seminars series.