
Towards a better understanding of early stopping for boosting algorithms


  • Speaker: Yuting Wei, Stanford University
  • Time: Friday 02 November 2018, 16:00-17:00
  • Venue: MR12

If you have a question about this talk, please contact Dr Sergio Bacallado.

In this talk, I will discuss the behaviour of boosting algorithms for non-parametric regression. While non-parametric models offer great flexibility, they can lead to overfitting and thus poor generalisation performance. For this reason, procedures for fitting these models must involve some form of regularisation. Although early stopping of iterative algorithms is a widely used form of regularisation in statistics and optimisation, it is less well understood than its analogue based on penalised regularisation. We exhibit a direct connection between a stopped iterate and the localised Gaussian complexity of the associated function class, which allows us to derive explicit and optimal stopping rules. We will discuss such stopping rules in detail for various reproducing kernel Hilbert spaces, and also extend these insights to broader classes of functions.
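To illustrate the kind of procedure the abstract refers to, the sketch below runs early-stopped L2-boosting (kernel gradient descent) on a toy regression problem. The stopping rule used here, picking the iteration with the smallest held-out error, is only a simple practical stand-in for the data-dependent rules based on localised Gaussian complexity discussed in the talk; the Gaussian kernel, step size, and simulated data are illustrative assumptions.

# A minimal sketch of early-stopped L2-boosting (kernel gradient descent) for
# non-parametric regression. The stopping rule -- smallest held-out error --
# is a simple proxy, not the localised-Gaussian-complexity rule from the talk.
import numpy as np

def gaussian_kernel(X1, X2, bandwidth=0.5):
    """Gaussian (RBF) kernel matrix between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
n = 200
X = np.sort(rng.uniform(-1, 1, n))
y = np.sin(3 * X) + 0.3 * rng.standard_normal(n)   # noisy toy regression data

# Split into a training set (for the boosting updates) and a held-out set
# (used only by the stopping rule).
idx = rng.permutation(n)
tr, ho = idx[:150], idx[150:]
K_tr = gaussian_kernel(X[tr], X[tr])
K_ho = gaussian_kernel(X[ho], X[tr])

eta = 0.5                    # step size for the functional gradient step
max_iters = 500
alpha = np.zeros(len(tr))    # fitted function f(x) = sum_i alpha_i k(x, x_i)

best_err, best_alpha, best_t = np.inf, alpha.copy(), 0
for t in range(1, max_iters + 1):
    resid = y[tr] - K_tr @ alpha       # residuals of the current iterate
    alpha += eta * resid / len(tr)     # L2-boosting / gradient update in the RKHS
    err = np.mean((y[ho] - K_ho @ alpha) ** 2)
    if err < best_err:                 # record the best early-stopped iterate
        best_err, best_alpha, best_t = err, alpha.copy(), t

print(f"stopped at iteration {best_t}, held-out MSE {best_err:.4f}")

Running the loop to max_iters and keeping the best recorded iterate makes the trade-off visible: the training residuals keep shrinking with every iteration, while the held-out error eventually turns upward, which is exactly the overfitting that an early-stopping rule is meant to prevent.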

This talk is part of the Statistics series.

