
New Methods in Bayesian Optimization for Machine Learning


If you have a question about this talk, please contact Dr Jes Frellsen.

Bayesian optimization is a methodology for the global optimization of expensive, noisy and multimodal black-box functions. When applied to the optimization of hyperparameters and model parameters of machine learning algorithms, this provides a reproducible and efficient methodology for running experiments that often outperforms domain experts. This talk will highlight our recent work on tailoring Bayesian optimization to problems in machine learning. Developing a principled statistical framework in the context of Bayesian optimization allows us to reason about useful additional information and model the specific structure of the problem. Pertaining to the former, I will discuss Bayesian optimization with multiple related tasks and reasoning about various forms of constraints. Then I will talk about our recent work on using a model tailored to optimization curves to forecast the result of a training run long before it is finished. This allows us to reason about when to stop training a model, when to start training a new one, and when to revisit an already partially trained one. If time permits, I will also discuss our recent efforts in reaching out to the wider scientific community to optimize expensive experiments in various other domains such as robotics and aeronautics.
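The loop described above — fit a probabilistic surrogate to the observations so far, then pick the next evaluation point by maximizing an acquisition function — can be sketched as follows. This is a minimal, generic illustration using a Gaussian-process surrogate with an RBF kernel and expected improvement; the toy objective, function names, and parameter values are illustrative and are not taken from the talk.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def rbf_kernel(a, b, length_scale=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-5):
    """GP posterior mean and variance at query points (zero prior mean)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)
    return mean, np.maximum(var, 1e-12)

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def expected_improvement(mean, var, best):
    """EI acquisition for minimization under a Gaussian posterior."""
    ei = np.empty_like(mean)
    for i, (m, v) in enumerate(zip(mean, var)):
        s = sqrt(v)
        z = (best - m) / s
        ei[i] = (best - m) * normal_cdf(z) + s * normal_pdf(z)
    return ei

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    """Minimize a black-box function f over an interval by Bayesian optimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(bounds[0], bounds[1], size=n_init)
    y = np.array([f(xi) for xi in x])
    grid = np.linspace(bounds[0], bounds[1], 200)  # candidate points
    for _ in range(n_iter):
        mean, var = gp_posterior(x, y, grid)
        x_next = grid[np.argmax(expected_improvement(mean, var, y.min()))]
        x = np.append(x, x_next)
        y = np.append(y, f(x_next))
    return x[np.argmin(y)], y.min()
```

In practice the surrogate's hyperparameters are also marginalized or optimized, and the acquisition is maximized by a continuous optimizer rather than a fixed grid; the talk's contributions (multi-task models, constraints, training-curve forecasting) build on this same fit-then-acquire loop.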

This talk is part of the Machine Learning @ CUED series.




© 2006-2024, University of Cambridge.