University of Cambridge > Machine Learning Reading Group @ CUED > A Predictive Study of Bayesian Nonparametric Regression Models

A Predictive Study of Bayesian Nonparametric Regression Models


If you have a question about this talk, please contact David Duvenaud.

In many situations, the assumptions of the standard linear model are unreasonable due to non-linearity in the regression function and a non-normal error distribution that may evolve with x. Countably infinite mixture models for the collection of conditional densities provide a flexible tool that can capture such behavior. In this talk, we will review such models and discuss predictive issues that arise from different choices of the covariate-dependent weights and atoms. We will focus in particular on the model obtained from a Dirichlet Process mixture model for the joint distribution of the response and covariate, and examine the impact of the dimension of the covariate, p, on prediction. We find that even for moderate p, a large number of components will typically be used to estimate the predictive conditional density, owing to the complexity of the marginal of x. To address this issue, we propose to replace the Dirichlet Process with the Enriched Dirichlet Process. This allows a more flexible local model for x, leading to fewer components and to within-component predictive estimates based on larger sample sizes. The result is more reliable predictive estimates, smaller credible intervals, and less prior influence. Moreover, the computations are simple extensions of those used for the Dirichlet Process mixture model. We demonstrate the advantages of our approach through a simulated example and an application to predicting Alzheimer's Disease status.
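As a rough illustration of the joint-modelling idea in the abstract (not the speaker's actual method), the sketch below fits a truncated Dirichlet Process mixture to the joint distribution of (x, y) using scikit-learn's `BayesianGaussianMixture`, then forms the conditional predictive mean of y given x as a responsibility-weighted average of the components' conditional means. The simulated data, truncation level, and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical simulated data: nonlinear mean, variance that grows with |x|.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=500)
y = np.sin(x) + rng.normal(scale=0.1 + 0.1 * np.abs(x))
data = np.column_stack([x, y])

# Truncated DP mixture for the JOINT density p(x, y);
# n_components is the truncation level, not the number of clusters used.
dpmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(data)

def conditional_mean(x0, model):
    """E[y | x = x0] under the fitted joint Gaussian mixture.

    Component k contributes its conditional mean
    mu_y + (Sigma_yx / Sigma_xx) * (x0 - mu_x),
    weighted by that component's posterior responsibility for x0.
    """
    means, covs, w = model.means_, model.covariances_, model.weights_
    # p(x0 | component k): the 1-D Gaussian margin in x.
    px = np.array([
        np.exp(-0.5 * (x0 - m[0]) ** 2 / c[0, 0]) / np.sqrt(2 * np.pi * c[0, 0])
        for m, c in zip(means, covs)
    ])
    resp = w * px
    resp /= resp.sum()
    cond_means = np.array([
        m[1] + c[1, 0] / c[0, 0] * (x0 - m[0]) for m, c in zip(means, covs)
    ])
    return float(resp @ cond_means)

print(conditional_mean(1.0, dpmm))
```

Note how the weights on the components depend on x0 only through the marginal of x, which is why, as the abstract points out, a complex covariate marginal can force many components into the predictive density even when the regression function itself is simple.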

This talk is part of the Machine Learning Reading Group @ CUED series.




© 2006-2017, University of Cambridge.