Statistical Learning Theory
If you have a question about this talk, please contact Alessandro Davide Ialongo.
Abstract
Wikipedia defines Statistical Learning Theory as a framework for machine learning drawing from the fields of statistics and functional analysis; it deals with the problem of finding a predictive function based on data. According to Vapnik (1999), “abstract learning theory established some conditions for generalization which are more general than those discussed in classical statistical paradigms and the understanding of these conditions inspired new algorithmic approaches to function estimation problems.”
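As a brief aside (not part of the original abstract; the symbols R, R_n, f, and the loss ℓ are standard notation introduced here only for illustration): with a loss function ℓ, the expected risk and the empirical risk on a sample of size n are

    R(f) = \mathbb{E}\left[\ell(f(X), Y)\right], \qquad \widehat{R}_n(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i),

and the “conditions for generalization” in the quote concern when minimising the empirical risk over a function class also yields a small expected risk.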
I will cover some introductory material, including the following topics:
Different loss functions
Learning algorithms: Empirical risk minimisation and regularisation
How to solve each problem (first-order methods only; a toy sketch appears after this list)
VC dimension and some bounds
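As a rough illustration of the empirical risk minimisation, regularisation, and first-order-method topics above, here is a minimal Python sketch. It uses a toy least-squares problem with an L2 penalty solved by plain gradient descent; the data, step size, and regularisation strength are made up for the example and are not taken from the talk.

    import numpy as np

    # Hypothetical toy data: a linear model with Gaussian noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.1 * rng.normal(size=200)

    def regularised_empirical_risk(w, X, y, lam):
        """Average squared loss on the sample plus an L2 penalty."""
        residuals = X @ w - y
        return residuals @ residuals / len(y) + lam * w @ w

    def gradient(w, X, y, lam):
        """Gradient of the regularised empirical risk above."""
        return 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w

    # Plain gradient descent (a first-order method): follow the negative gradient.
    w = np.zeros(5)
    step, lam = 0.1, 0.01
    for _ in range(500):
        w -= step * gradient(w, X, y, lam)

    print("estimated weights:", np.round(w, 3))
    print("final regularised empirical risk:",
          round(regularised_empirical_risk(w, X, y, lam), 4))

Running this prints weights close to w_true, showing how a first-order method drives the regularised empirical risk down on a simple problem.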
Recommended Reading
Introduction to Statistical Learning Theory, Bousquet, O., Boucheron, S. and Lugosi, G.
MIT Statistical Learning Theory and Applications course and RegML summer school notes by Lorenzo Rosasco
An Overview of Statistical Learning Theory, Vapnik, V.
This talk is part of the Machine Learning Reading Group @ CUED series.