Convex Optimisation
If you have a question about this talk, please contact Shakir Mohamed.
Optimization is a fundamental tool in applied computer science. We aim to give a broad overview of convex optimization with examples relevant to machine learning.
- What is convexity? BV 3.1, 3.2.
- Quasiconvexity and unimodality. BV 3.4.
- Duality, KKT conditions. BV 5.1-5.3, 5.5.
- Newton’s method, quadratic convergence. BV 9.5. (Illustrated in the first sketch after this list.)
- Conjugate gradient. NW 5.1.
- Line search methods, Wolfe conditions. NW 3.1.
- Quasi-Newton methods, e.g. BFGS. NW 6.1. (See the second sketch below, which also uses a Wolfe-type line search.)
- Interior point methods. BV 11.2.
- Software: minFunc and CVX. (See the third sketch below.)
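
To make the quadratic-convergence claim concrete, here is a minimal Python sketch of Newton’s method (BV 9.5) on the strictly convex function f(x) = x - log(x), whose minimiser is x* = 1. The test function is a hypothetical choice for illustration, not one prescribed by the readings.

```python
import math

def f(x):
    return x - math.log(x)     # strictly convex on x > 0, minimised at x = 1

def grad(x):
    return 1.0 - 1.0 / x       # f'(x)

def hess(x):
    return 1.0 / x ** 2        # f''(x) > 0 on the domain x > 0

x = 0.5                        # starting point inside the domain
for k in range(6):
    x -= grad(x) / hess(x)     # Newton step: x - f'(x) / f''(x)
    print(f"iter {k}: |x - 1| = {abs(x - 1.0):.3e}")
```

The printed errors are roughly squared at each iteration (2.5e-01, 6.3e-02, 3.9e-03, 1.5e-05, 2.3e-10, ...), which is exactly the quadratic convergence discussed in BV 9.5.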
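minFunc is a MATLAB package; as a rough Python stand-in, the sketch below uses SciPy’s BFGS implementation (NW 6.1), which internally uses a Wolfe-type line search (NW 3.1). The Rosenbrock function is my choice of test problem; it is the classic (nonconvex) example from NW, minimised at (1, 1).

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Rosenbrock function: the standard unconstrained test problem in NW
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# Gradient is approximated by finite differences when jac is not supplied.
res = minimize(rosen, x0=np.array([-1.2, 1.0]), method="BFGS")
print(res.x)   # should be close to [1. 1.]
```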
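CVX itself is also a MATLAB package; a hedged Python analogue is CVXPY. The sketch below states and solves a small convex problem (a nonnegative least-squares fit), with random placeholder data of my own choosing.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # placeholder data
b = rng.standard_normal(20)

x = cp.Variable(5)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
problem.solve()                    # dispatched to an installed convex solver

print(problem.value)               # optimal objective value
print(x.value)                     # minimiser, with all entries >= 0
```

As with CVX, the point is that the user states the problem in its natural convex form and the modelling layer handles the solver details.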
References:
BV = Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
Available free at http://www.stanford.edu/~boyd/cvxbook/
NW = Jorge Nocedal and Stephen Wright, Numerical Optimization, 2nd edition, Springer, 2006.
This talk is part of the Machine Learning Reading Group @ CUED series.