Bayesians turn to experts for advice!
If you have a question about this talk, please contact Rachel Fogg.
In Bayesian model selection and model averaging, inference is normally based on a posterior distribution on the models, usually interpreted as a measure of how likely we consider each of the models to be “true”, or at least in some sense close to true, given the observations.
Rather than with truth, I will be concerned with the more practical goal of finding a “useful” model, in the sense that it predicts future outcomes of the underlying process well. As it turns out, the most useful model may well vary depending on the number of available observations! For instance, given ten samples from some continuous density, a seven-bin histogram model is more useful than a 1,000-bin model, even though the latter is arguably closer to being “true”.
As it turns out, methods for tracking the transient performance of prediction strategies have already been developed in the learning-theory literature under the heading “prediction with expert advice”. I will illustrate how these methods can improve model selection performance using results from computer simulations on density estimation problems.
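The idea sketched above can be made concrete with a small simulation. Below is a hedged, minimal sketch (not the speaker's actual method) of prediction with expert advice applied to density estimation: each “expert” is a Laplace-smoothed histogram model with a different number of bins on [0, 1), and an exponential-weights mixture predicts each new observation sequentially under log-loss. All function names and parameter choices here are illustrative assumptions.

```python
import math
import random

def histogram_density(bins, counts, n, x):
    # Laplace-smoothed predictive density of x under a histogram model
    # with `bins` equal-width bins on [0, 1); counts/n are past observations.
    b = min(int(x * bins), bins - 1)
    return (counts[b] + 1) / (n + bins) * bins  # mass / bin width

def expert_mixture(samples, bin_options, eta=1.0):
    """Sequentially mix histogram 'experts' via exponential weights on log-loss.

    With eta = 1 this coincides with Bayesian model averaging under a
    uniform prior on the experts (an assumption of this sketch).
    """
    counts = {k: [0] * k for k in bin_options}
    log_w = {k: 0.0 for k in bin_options}      # log weight per expert
    total_log_loss = 0.0
    n = 0
    for x in samples:
        # Normalize weights stably in log space, then form the mixture density.
        m = max(log_w.values())
        w = {k: math.exp(log_w[k] - m) for k in bin_options}
        z = sum(w.values())
        p = sum(w[k] / z * histogram_density(k, counts[k], n, x)
                for k in bin_options)
        total_log_loss += -math.log(p)
        # Each expert's weight is updated by its own predictive log-loss.
        for k in bin_options:
            log_w[k] += eta * math.log(histogram_density(k, counts[k], n, x))
            counts[k][min(int(x * k), k - 1)] += 1
        n += 1
    return log_w, total_log_loss

if __name__ == "__main__":
    random.seed(0)
    data = [random.betavariate(2, 5) for _ in range(200)]  # smooth density on (0, 1)
    log_w, loss = expert_mixture(data, bin_options=[7, 1000])
    print(log_w, loss)
```

With few observations the coarse (7-bin) expert typically accumulates lower log-loss than the 1,000-bin one, echoing the point in the abstract that the most useful model depends on the sample size; as data accumulate, finer models can overtake it.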
This talk is part of the Signal Processing and Communications Lab Seminars series.