
Three Small Steps ... to Reconceiving Machine Learning



I will illustrate, through three separate examples, my work on my long-term project of reconceiving machine learning. After a brief introduction to the project, I will present a new and explicit characterisation of the convexity of proper composite losses (the composition of a proper binary loss, or scoring rule, with a link function); such losses are the appropriate choice for binary class probability estimation. Second, I will show an apparently novel relationship between M-estimators (where one maximises an objective function) and L-estimators (linear combinations of order statistics). Finally, I will merely sketch an intriguing connection between the design of loss functions for prediction problems and the various uncertainty calculi developed in the economics literature. Remarkably, there are results showing that even if one starts from a purely “Bayesian” perspective, one is inexorably led to nonlinear expectations that do not fit within the framework of probability theory. The conclusion is that, to do a proper job of being the “new science of uncertainty”, machine learning needs to look well beyond the theory of probability.
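
As a rough companion to the definitions in the abstract (and not drawn from the talk itself), the following Python sketch illustrates the two objects in their most familiar textbook forms: it numerically checks that the proper log loss composed with the logistic (sigmoid) link yields a loss that is convex in the raw score, and it recovers the sample median both as an M-estimator (minimiser of the sum of absolute deviations) and as an L-estimator (all weight on the middle order statistic). The function names, grid, and tolerances are illustrative choices, not anything stated in the abstract.

```python
import numpy as np

# --- Proper composite loss: log loss composed with the logistic (sigmoid) link ---
# The composite loss on a raw score v is l(y, v) = -y*log(p) - (1-y)*log(1-p)
# with p = sigmoid(v). The talk's first result characterises when such
# compositions are convex; here we only verify convexity numerically for this
# particular case.

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def logistic_composite_loss(y, v):
    p = sigmoid(v)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

v = np.linspace(-5.0, 5.0, 1001)
h = v[1] - v[0]
loss = logistic_composite_loss(1, v)
second_diff = (loss[2:] - 2.0 * loss[1:-1] + loss[:-2]) / h**2
assert np.all(second_diff >= -1e-8)  # non-negative second differences => convex on the grid

# --- M-estimators vs L-estimators: the sample median is both ---
# M-estimator view: the median minimises the objective sum_i |x_i - theta|.
# L-estimator view: the median puts all its weight on the middle order statistic.

rng = np.random.default_rng(0)
x = rng.standard_normal(101)

theta = np.linspace(x.min(), x.max(), 10001)
m_est = theta[np.argmin(np.abs(x[:, None] - theta).sum(axis=0))]

l_est = np.sort(x)[len(x) // 2]  # middle order statistic (n odd)

assert abs(m_est - l_est) <= theta[1] - theta[0]  # the two views agree up to grid resolution
```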

This talk is part of the Microsoft Research Cambridge, public talks series.

