Safe Learning: How to Modify Bayesian Inference when All Models are Wrong

If you have a question about this talk, please contact Richard Samworth.

Standard Bayesian inference can behave suboptimally if the model under consideration is wrong: in some simple settings, the posterior may fail to concentrate even in the limit of infinite sample size. We introduce a test that can tell from the data whether we are in such a situation. If we are, we can adjust the learning rate (equivalently: make the prior lighter-tailed) in a data-dependent way. The resulting “safe” estimator continues to achieve good rates with wrong models. When applied to classification problems, the safe estimator achieves the optimal rates for the Tsybakov exponent of the underlying distribution, thereby establishing a connection between Bayesian inference and statistical learning theory.
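The adjustment described above can be illustrated with a generalized ("tempered") posterior, where the likelihood is raised to a learning rate η ≤ 1 before being combined with the prior; η = 1 recovers standard Bayes, while smaller η down-weights a possibly misspecified likelihood. The sketch below is a minimal grid-based illustration of this idea for a Bernoulli model, not the speaker's actual estimator; the function name and the fixed choice of η are assumptions for demonstration (the talk's method chooses the rate in a data-dependent way).

```python
import numpy as np

def tempered_posterior(data, theta_grid, prior, eta):
    """Generalized posterior on a parameter grid:
    posterior(theta) proportional to prior(theta) * likelihood(theta)**eta.
    eta = 1 is standard Bayes; eta < 1 tempers a misspecified likelihood.
    (Illustrative sketch only -- not the talk's data-dependent rule.)"""
    k, n = data.sum(), len(data)
    # Bernoulli log-likelihood at each grid point
    loglik = k * np.log(theta_grid) + (n - k) * np.log(1 - theta_grid)
    logpost = np.log(prior) + eta * loglik
    logpost -= logpost.max()          # stabilise before exponentiating
    post = np.exp(logpost)
    return post / post.sum()

# Uniform prior over a grid of Bernoulli success probabilities
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / len(theta)
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=50)

standard = tempered_posterior(data, theta, prior, eta=1.0)
safe = tempered_posterior(data, theta, prior, eta=0.5)
# The eta = 0.5 posterior is flatter, i.e. more cautious about the model.
```

Lowering η spreads the posterior out, which is the sense in which tempering makes inference more conservative under misspecification; equivalently, it acts like a lighter-tailed prior relative to the likelihood.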

This talk is part of the Statistics series.

