Hyperparameter Optimisation
If you have a question about this talk, please contact Robert Pinsler.
As machine learning models become more complex, so do the spaces of their hyperparameters: those parameters not learned directly from the data, but which instead specify the design of the training method or architecture. With the exposure of pathological comparability issues in the literature, and the suggestion that many respected contributions see marked improvement under a more systematic configuration selection strategy, hyperparameter optimisation is fast becoming an essential component of the research workflow. In this talk, we will survey the evolution of modern hyperparameter optimisation, from the key elements of fundamental algorithms, such as Bayesian optimisation, to state-of-the-art methods for intelligent and efficient optimisation, such as BOHB (Bayesian Optimisation and Hyperband). We will also discuss the practicalities of implementing hyperparameter optimisation in research projects, including an overview of suitable libraries and off-the-shelf implementations.
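As a taste of what such off-the-shelf implementations look like, the minimal sketch below uses Optuna, one popular hyperparameter optimisation library. The library choice, the search space, and the synthetic objective standing in for a real validation loss are illustrative assumptions, not material drawn from the talk itself.

import optuna

def objective(trial):
    # Hypothetical search space: a log-scaled learning rate and a layer count.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    n_layers = trial.suggest_int("n_layers", 1, 4)
    # Stand-in for a real validation loss; in practice you would train a
    # model with these hyperparameters and return its validation metric.
    return (lr - 1e-2) ** 2 + 0.1 * n_layers

# Optuna's default TPE sampler is itself a form of Bayesian optimisation.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)

For a BOHB-style setup, Optuna's TPE sampler can be combined with its Hyperband pruner, which stops unpromising trials early rather than training every configuration to completion.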
This talk is part of the Machine Learning Reading Group @ CUED series.