University of Cambridge > Talks.cam > Statistics > Conditional Predictive Inference Post-Model Selection

Conditional Predictive Inference Post-Model Selection


If you have a question about this talk, please contact Richard Nickl.

We give a finite-sample analysis of predictive inference procedures after model selection in regression with random design. The analysis focuses on a statistically challenging scenario: the number of potentially important explanatory variables can be infinite, no regularity conditions are imposed on unknown parameters, the number of explanatory variables in a ‘good’ model can be of the same order as the sample size, and the number of candidate models can be of larger order than the sample size. The performance of inference procedures is evaluated conditional on the training sample. Under weak conditions on only the number of candidate models and their complexity, and uniformly over all data-generating processes under consideration, we show that a certain prediction interval is approximately valid and short with high probability in finite samples: its actual coverage probability is close to the nominal one, and its length is close to that of an infeasible interval constructed with knowledge of the ‘best’ candidate model. Similar results hold for predictive inference procedures other than prediction intervals, such as tests of whether a future response will lie above or below a given threshold.
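The split-sample idea behind conditionally valid prediction intervals of this kind can be sketched in a few lines. The following is a simplified illustration, not the speaker's exact procedure: a model is selected on one half of the sample (here with a naive AIC-style rule over nested submodels, an assumption for illustration), and the interval is then calibrated from the empirical quantile of held-out absolute residuals, so that its coverage does not rely on the selected model being correct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a regression with many candidate explanatory variables,
# only a few of which actually matter.
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 1.0]
y = X @ beta + rng.normal(size=n)

# Split: first half for model selection and fitting,
# second half for calibrating the interval.
n_fit = n // 2
Xf, yf, Xc, yc = X[:n_fit], y[:n_fit], X[n_fit:], y[n_fit:]

def fit_ols(Xs, ys, k):
    """Least-squares fit on the first k columns; returns coefficients."""
    coef, *_ = np.linalg.lstsq(Xs[:, :k], ys, rcond=None)
    return coef

def aic_score(k):
    """Simple AIC-style criterion on the fitting sample (illustrative)."""
    coef = fit_ols(Xf, yf, k)
    rss = np.sum((yf - Xf[:, :k] @ coef) ** 2)
    return n_fit * np.log(rss / n_fit) + 2 * k

# Model selection: pick the nested submodel minimising the criterion.
k_hat = min(range(1, p + 1), key=aic_score)
coef_hat = fit_ols(Xf, yf, k_hat)

# Calibration: the (1 - alpha) empirical quantile of held-out absolute
# residuals gives the interval's half-width.
alpha = 0.1
resid = np.abs(yc - Xc[:, :k_hat] @ coef_hat)
q = np.quantile(resid, 1 - alpha)

# Prediction interval for a future response at a new design point.
x_new = rng.normal(size=p)
center = x_new[:k_hat] @ coef_hat
interval = (center - q, center + q)
```

Because the calibration residuals come from data not used in selection or fitting, the interval's width automatically adapts to how well the selected model predicts, which is the spirit (though not the letter) of the guarantees described in the abstract.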

This talk is part of the Statistics series.


© 2006-2019 Talks.cam, University of Cambridge.