Convex Optimisation

If you have a question about this talk, please contact Shakir Mohamed.

Optimisation is a fundamental tool in applied computer science. We aim to give a broad overview of convex optimisation, with examples relevant to machine learning. Illustrative definitions and code sketches for each topic follow the reading list below.
  1. What is convexity? BV 3.1, 3.2.
  2. Quasiconvexity and unimodality. BV 3.4.
  3. Duality, KKT conditions. BV 5.1-5.3, 5.5.
  4. Newton’s method, quadratic convergence. BV 9.5.
  5. Conjugate gradient. NW 5.1.
  6. Line search methods, Wolfe conditions. NW 3.1.
  7. Quasi-Newton methods, e.g. BFGS. NW 6.1.
  8. Interior point methods. BV 11.2.
  9. Software: minFunc and CVX.
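
For items 1 and 2, the standard definitions from BV chapter 3, written out in LaTeX as a quick reference:

    % f is convex iff dom f is convex and, for all x, y in dom f
    % and all theta in [0, 1]:
    f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)

    % f is quasiconvex iff every sublevel set {x : f(x) <= alpha}
    % is convex; equivalently:
    f(\theta x + (1 - \theta) y) \le \max\{ f(x),\, f(y) \}

On the real line, quasiconvexity is the usual notion of unimodality: f is non-increasing, then non-decreasing.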
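
For item 3, the KKT conditions for the standard-form problem "minimise f_0(x) subject to f_i(x) <= 0, i = 1..m, and h_j(x) = 0, j = 1..p" (BV 5.5.3):

    % Primal feasibility:
    f_i(x^\star) \le 0, \qquad h_j(x^\star) = 0
    % Dual feasibility:
    \lambda_i^\star \ge 0
    % Complementary slackness:
    \lambda_i^\star f_i(x^\star) = 0
    % Stationarity:
    \nabla f_0(x^\star) + \sum_{i=1}^{m} \lambda_i^\star \nabla f_i(x^\star)
        + \sum_{j=1}^{p} \nu_j^\star \nabla h_j(x^\star) = 0

For a convex problem satisfying Slater's condition, these conditions are necessary and sufficient for optimality.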
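
For item 4, a minimal sketch of the pure Newton iteration in Python (the function names and toy objective are illustrative, not from the talk):

    import numpy as np

    def newton(grad, hess, x0, tol=1e-10, max_iter=50):
        # Pure Newton step x <- x - H(x)^{-1} g(x); near a minimiser with
        # a positive definite Hessian the iterates converge quadratically.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            # Solve H dx = -g rather than forming the inverse explicitly.
            dx = np.linalg.solve(hess(x), -g)
            x = x + dx
        return x

    # Toy smooth convex objective f(x) = sum_i cosh(x_i), minimised at 0:
    grad = lambda x: np.sinh(x)
    hess = lambda x: np.diag(np.cosh(x))
    print(newton(grad, hess, [1.0, -2.0]))  # -> approximately [0, 0]

A practical implementation would add a line search (item 6) to guarantee progress far from the solution.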
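
For item 5, a sketch of the linear conjugate gradient method for A x = b with A symmetric positive definite, equivalent to minimising the quadratic (1/2) x^T A x - b^T x (cf. NW chapter 5; variable names are ours):

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10):
        # Converges in at most n steps in exact arithmetic.
        x = np.zeros_like(b)
        r = A @ x - b        # residual = gradient of the quadratic
        p = -r               # first direction: steepest descent
        for _ in range(len(b)):
            if np.linalg.norm(r) < tol:
                break
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)        # exact line search along p
            x = x + alpha * p
            r_new = r + alpha * Ap
            beta = (r_new @ r_new) / (r @ r)  # keeps directions A-conjugate
            p = -r_new + beta * p
            r = r_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))  # matches np.linalg.solve(A, b)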
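
For item 6, the two Wolfe conditions on a step length alpha along a descent direction p_k (NW chapter 3):

    % Sufficient decrease (Armijo):
    f(x_k + \alpha p_k) \le f(x_k) + c_1 \alpha \, \nabla f(x_k)^T p_k
    % Curvature:
    \nabla f(x_k + \alpha p_k)^T p_k \ge c_2 \, \nabla f(x_k)^T p_k
    % with 0 < c_1 < c_2 < 1; NW suggests c_1 = 10^{-4} and, for
    % Newton and quasi-Newton methods, c_2 = 0.9.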
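
For item 7, the BFGS update of the inverse-Hessian approximation H_k (NW chapter 6):

    % With s_k = x_{k+1} - x_k,  y_k = \nabla f_{k+1} - \nabla f_k,
    % and \rho_k = 1 / (y_k^T s_k):
    H_{k+1} = (I - \rho_k s_k y_k^T) \, H_k \, (I - \rho_k y_k s_k^T)
              + \rho_k s_k s_k^T

The search direction is then p_k = -H_k \nabla f_k, combined with a line search satisfying the Wolfe conditions so that y_k^T s_k > 0 and H_{k+1} stays positive definite.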
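
For item 8, the core of the barrier (interior point) method from BV chapter 11:

    % Replace "minimise f_0(x) s.t. f_i(x) <= 0, i = 1..m" by the
    % smooth unconstrained problem, for a parameter t > 0:
    \min_x \; t f_0(x) - \sum_{i=1}^{m} \log(-f_i(x))
    % Its minimiser x^\star(t) lies on the central path and is at
    % most m/t suboptimal; the method increases t geometrically,
    % solving each subproblem by Newton's method.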
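
For item 9: minFunc and CVX are MATLAB packages. As an illustration of the disciplined-convex-programming style that CVX introduced, here is a sketch in CVXPY, its Python counterpart (the data is made up; requires the cvxpy package):

    import numpy as np
    import cvxpy as cp

    # Made-up data for a small box-constrained least-squares problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)

    x = cp.Variable(5)
    objective = cp.Minimize(cp.sum_squares(A @ x - b))
    constraints = [x >= -1, x <= 1]
    problem = cp.Problem(objective, constraints)
    problem.solve()  # CVXPY picks a suitable convex solver automatically

    print("optimal value:", problem.value)
    print("optimal x:", x.value)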

References:

BV = Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004. Available free at http://www.stanford.edu/~boyd/cvxbook/

NW = Jorge Nocedal and Stephen Wright, Numerical Optimization, 2nd edition, Springer, 2006.

This talk is part of the Machine Learning Reading Group @ CUED series.
