
Coin Betting for Backprop without Learning Rates and More


If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

 Please be aware that this event may be recorded. Microsoft will own the copyright of any recording and reserves the right to distribute it as required.

Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process is still one of the main bottlenecks. In this talk, I will propose a new stochastic gradient descent procedure that does not require any learning rate setting. Contrary to previous methods, we do not adapt the learning rates, nor do we make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a non-stochastic coin, and we propose an optimal strategy based on a generalization of Kelly betting. Moreover, I’ll show how this reduction can also be used for other machine learning problems. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
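To make the reduction concrete, here is a minimal sketch of a learning-rate-free optimizer in the coin-betting spirit the abstract describes: the negative gradient is treated as the outcome of a coin flip, and each iterate is a bet of a fraction of the accumulated "wealth" on that coin. The specific update formulas, variable names, and the constant `alpha` below are illustrative assumptions, not necessarily the speaker's exact algorithm.

```python
def coin_betting_optimize(grad, w0=0.0, steps=2000, alpha=100.0):
    """Sketch of a coin-betting (Kelly-style) gradient method in 1D.

    No learning rate is set anywhere: the step is derived from the
    bettor's accumulated wealth instead.
    """
    w = w0
    L = 1e-8       # running maximum of |gradient| seen so far
    G = 0.0        # sum of |gradient| (damps the bets over time)
    reward = 0.0   # money won so far by betting on the coin
    theta = 0.0    # sum of signed coin outcomes
    for _ in range(steps):
        g = -grad(w)                       # coin outcome: negative gradient
        L = max(L, abs(g))
        G += abs(g)
        reward = max(reward + (w - w0) * g, 0.0)
        theta += g
        # bet a signed fraction of (initial wealth + reward) on the coin
        w = w0 + theta / (L * max(G + L, alpha * L)) * (L + reward)
    return w

# minimize f(w) = (w - 3)^2 without tuning any step size
w_star = coin_betting_optimize(lambda w: 2.0 * (w - 3.0))
```

Note the contrast with plain SGD: there is no `eta` to tune, and the effective step size grows automatically as the bettor's wealth compounds, then shrinks as accumulated gradients damp the bets.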

This talk is part of the Frontiers in Artificial Intelligence Series.




© 2006-2017 Talks.cam, University of Cambridge.