
Safe Learning: How to Modify Bayesian Inference when All Models are Wrong


If you have a question about this talk, please contact Richard Samworth.

This talk has been cancelled.

Standard Bayesian inference can behave suboptimally if the model under consideration is wrong: in some simple settings, the posterior may fail to concentrate even in the limit of infinite sample size. We introduce a test that can tell from the data whether we are in such a situation. If we are, we can adjust the learning rate (equivalently: make the prior lighter-tailed) in a data-dependent way. The resulting “safe” estimator continues to achieve good rates with wrong models. When applied to classification, the approach achieves optimal rates under Tsybakov’s conditions, thereby creating a bridge between Bayes/MDL and statistical learning-style inference.
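
The abstract's fix, raising the likelihood to a power η ≤ 1 in Bayes' rule and choosing η from the data, can be made concrete. The Python sketch below is only a stylized illustration, not the speaker's actual procedure: it fits a deliberately misspecified Gaussian location model (data drawn with standard deviation 2, model assuming 1) on a parameter grid, and picks the learning rate η from a small candidate set by minimizing a prequential (predict-then-update) posterior-expected log loss. The model, the grid, the candidate η values, and the selection criterion are all assumptions made for this example.

import numpy as np

def tempered_posterior(log_prior, log_lik, eta):
    # Generalized ("eta-tempered") posterior on a parameter grid:
    # weights proportional to prior * likelihood**eta.
    log_post = log_prior + eta * log_lik
    log_post -= log_post.max()          # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

def prequential_loss(x, thetas, log_prior, eta, sigma=1.0):
    # Cumulative posterior-expected log loss, computed prequentially:
    # at each step, form the eta-posterior from past data, then score x[t].
    log_norm = np.log(sigma * np.sqrt(2.0 * np.pi))
    loss = 0.0
    log_lik = np.zeros_like(thetas)     # running per-theta log-likelihood
    for xt in x:
        w = tempered_posterior(log_prior, log_lik, eta)
        nll_t = 0.5 * ((xt - thetas) / sigma) ** 2 + log_norm  # -log p_theta(x_t)
        loss += (w * nll_t).sum()
        log_lik -= nll_t                # absorb the new observation
    return loss

# Illustrative setup (not from the talk): misspecified Gaussian location model.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=200)      # true sd = 2, but the model assumes sd = 1
thetas = np.linspace(-5.0, 5.0, 201)    # grid of candidate means
log_prior = np.zeros_like(thetas)       # uniform prior over the grid

# Choose the learning rate from a small candidate set; eta = 1 is standard Bayes.
etas = [1.0, 0.5, 0.25, 0.125]
losses = {eta: prequential_loss(x, thetas, log_prior, eta) for eta in etas}
eta_hat = min(losses, key=losses.get)
print("selected learning rate:", eta_hat)

Halving η down a candidate grid keeps the comparison cheap, and since η = 1 recovers standard Bayesian updating, the selected rate directly reports whether the data call for a smaller learning rate (equivalently, per the abstract, a lighter-tailed prior).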

This talk is part of the Statistics series.
