Towards a better understanding of early stopping for boosting algorithms
If you have a question about this talk, please contact Dr Sergio Bacallado.

In this talk, I will discuss the behaviour of boosting algorithms for non-parametric regression. While non-parametric models offer great flexibility, they can lead to overfitting and thus poor generalisation performance. For this reason, procedures for fitting these models must involve some form of regularisation. Although early stopping of iterative algorithms is a widely used form of regularisation in statistics and optimisation, it is less well understood than its analogue based on penalised regularisation. We exhibit a direct connection between a stopped iterate and the localised Gaussian complexity of the associated function class, which allows us to derive explicit and optimal stopping rules. We will discuss such stopping rules in detail for various reproducing kernel Hilbert spaces, and also extend these insights to broader classes of functions.

This talk is part of the Statistics series.
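For readers unfamiliar with the setup, the sketch below (not from the talk) illustrates early-stopped L2-boosting for kernel regression: gradient steps on the squared loss over kernel coefficients, with the iteration count chosen by held-out error as a practical stand-in for the data-driven stopping rules derived from localised Gaussian complexity. The Gaussian kernel, step size, and validation split are illustrative assumptions.

```python
# A minimal sketch of early-stopped L2-boosting (kernel gradient descent).
# Illustrative only: kernel choice, step size, and validation-based stopping
# are assumptions, not the stopping rule discussed in the talk.
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_boosting(X_tr, y_tr, X_val, y_val, step=0.1, max_iter=500):
    K_tr = gaussian_kernel(X_tr, X_tr)
    K_val = gaussian_kernel(X_val, X_tr)
    n = len(y_tr)
    alpha = np.zeros(n)                    # coefficients of the boosted fit
    best_alpha, best_err, best_t = alpha.copy(), np.inf, 0
    for t in range(1, max_iter + 1):
        resid = y_tr - K_tr @ alpha        # current residuals
        alpha = alpha + step * resid / n   # functional gradient step
        err = np.mean((y_val - K_val @ alpha) ** 2)
        if err < best_err:                 # track the best stopping time
            best_alpha, best_err, best_t = alpha.copy(), err, t
    return best_alpha, best_t
```

Running more iterations drives the training error towards zero, so the choice of stopping time is what controls the bias-variance trade-off; the talk's contribution concerns how to choose it without a validation set, via the localised Gaussian complexity of the function class.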