Using gradient descent for optimization and learning

If you have a question about this talk, please contact Simon Lacoste-Julien.

In machine learning, to solve a particular task we often define a cost function that we try to minimize over a set of data points, the training set. There has been extensive work on designing efficient optimization techniques to minimize such functions, among them BFGS and conjugate gradient. However, this only ensures a fast decrease in the training error, whereas we are ultimately interested in minimizing another error: the test error. Reaching a low test error on a particular problem is called learning. As Bottou has shown, good optimizers may prove to be very bad learning methods, and vice versa.
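
To make the distinction concrete, here is a minimal sketch (not taken from the talk) of plain gradient descent on a least-squares training loss, with the held-out test loss monitored on the side; the synthetic data, step size and iteration counts are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: the training loss is what we optimize,
# the test loss is what we ultimately care about.
d, n_train, n_test = 20, 50, 1000
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true + 0.5 * rng.normal(size=n_test)

def loss(w, X, y):
    # Mean squared error: the cost function being minimized on the training set.
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    # Gradient of the training loss with respect to the weights.
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(d)
step = 0.1
for t in range(201):
    if t % 50 == 0:
        print(f"iter {t:3d}   train {loss(w, X_train, y_train):.4f}   "
              f"test {loss(w, X_test, y_test):.4f}")
    w -= step * grad(w, X_train, y_train)

With only 50 training points and 20 parameters, the training loss typically keeps dropping below the noise level while the test loss levels off, which is exactly the gap between optimization and learning discussed in the talk.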

After reviewing the most popular optimization techniques, I shall briefly summarize Bottou’s conclusions and then present a new gradient descent algorithm which aims to directly address this issue: optimizing the test error using only a training set.
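
The new algorithm itself is not described on this page, but Bottou’s observation can be illustrated with a hedged sketch: under the same budget of per-example gradient evaluations, plain stochastic gradient descent (a comparatively imprecise optimizer of the training loss) is compared against full-batch gradient descent on both training and test error. All numbers below are illustrative assumptions, not results from the talk.

import numpy as np

rng = np.random.default_rng(1)

# Same kind of synthetic regression problem as above, with a larger training set.
d, n_train, n_test = 20, 200, 2000
w_true = rng.normal(size=d)
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)
X_te = rng.normal(size=(n_test, d))
y_te = X_te @ w_true + 0.5 * rng.normal(size=n_test)

def loss(w, X, y):
    return 0.5 * np.mean((X @ w - y) ** 2)

budget = 10 * n_train  # total number of per-example gradient evaluations

# Full-batch gradient descent: a few precise passes over the data.
w_gd = np.zeros(d)
for _ in range(budget // n_train):
    w_gd -= 0.1 * X_tr.T @ (X_tr @ w_gd - y_tr) / n_train

# Stochastic gradient descent: the same budget spent one noisy example at a time.
w_sgd = np.zeros(d)
for t in range(budget):
    i = rng.integers(n_train)
    step = 0.05 / (1.0 + t / n_train)  # slowly decreasing step size
    w_sgd -= step * X_tr[i] * (X_tr[i] @ w_sgd - y_tr[i])

for name, w in [("batch GD", w_gd), ("SGD", w_sgd)]:
    print(f"{name:8s} train {loss(w, X_tr, y_tr):.4f}  test {loss(w, X_te, y_te):.4f}")

Under a fixed budget like this, the noisy updates typically reach a lower test error than the handful of exact passes, which is the optimization-versus-learning trade-off the talk revisits; it is one classic illustration of Bottou’s point, not the speaker’s new method.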

Speaker’s bio:

Nicolas received a Master’s degree in Applied Maths from the Ecole Centrale Paris and one in Maths, Vision and Learning from the ENS Cachan in 2003. From 2004 to 2008, he did a PhD in Montreal under the supervision of Yoshua Bengio, working on designing and optimizing neural networks. Since 2008, he has been a postdoctoral researcher at Microsoft Research Cambridge, working with John Winn on using deep neural networks for vision.

This talk is part of the Machine Learning @ CUED series.
