Learning to Learn without Gradient Descent by Gradient Descent


If you have a question about this talk, please contact Adrian Weller.

Abstract: We learn recurrent neural network optimizers, trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer: they can efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks, and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
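To make the setup concrete, here is a minimal sketch (not the authors' code) of the *interface* such a learned optimizer exposes: an RNN observes the previous query point and the function value returned by the black box, updates its hidden state, and proposes the next query. The weights below are random placeholders; in the work described, they would be trained by gradient descent on simple synthetic functions (e.g. samples from a Gaussian process) so that the unrolled proposals minimize the objective within the training horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

class RNNOptimizer:
    """Sketch of a learned black-box optimizer's query loop.

    Weights are random here for illustration only; in the actual
    method they are meta-trained by gradient descent on synthetic
    functions, then applied without gradients of the target f.
    """

    def __init__(self, dim, hidden=16):
        self.Wh = rng.normal(scale=0.1, size=(hidden, hidden))
        self.Wx = rng.normal(scale=0.1, size=(hidden, dim + 1))
        self.Wo = rng.normal(scale=0.1, size=(dim, hidden))
        self.h = np.zeros(hidden)

    def step(self, x_prev, y_prev):
        # Feed (previous query, observed value) into the recurrence
        # and emit the next query point.
        inp = np.concatenate([x_prev, [y_prev]])
        self.h = np.tanh(self.Wh @ self.h + self.Wx @ inp)
        return self.Wo @ self.h

def f(x):
    # A stand-in black-box objective: no derivatives are ever used.
    return float(np.sum((x - 0.5) ** 2))

opt = RNNOptimizer(dim=2)
x, best = np.zeros(2), np.inf
for t in range(20):  # the training horizon T
    y = f(x)
    best = min(best, y)
    x = opt.step(x, y)
print("best value found:", best)
```

Note that the loop only ever evaluates `f`, never its gradient; the gradient descent of the title happens offline, through the unrolled RNN, during meta-training.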

Bio: Yutian is a senior research scientist at DeepMind. He obtained his PhD in 2013 under the supervision of Prof. Max Welling at the University of California, Irvine, where he worked on efficient training and sampling algorithms for probabilistic graphical models. After graduating, Yutian moved to the University of Cambridge for a postdoc with Prof. Zoubin Ghahramani in the Computational and Biological Learning Lab, where he worked on scaling up Bayesian inference methods for large-scale problems. His current research focuses on applying probabilistic models to challenging real-world problems using deep learning and reinforcement learning methods.

This talk is part of the Machine Learning @ CUED series.


© 2006-2019 Talks.cam, University of Cambridge.