

## The remarkable flexibility of BART

- Edward George (University of Pennsylvania)
- Wednesday 30 May 2018, 10:00-11:00
- Seminar Room 2, Newton Institute.
If you have a question about this talk, please contact info@newton.ac.uk.

STS - Statistical scalability

For the canonical regression setup, where one wants to discover the relationship between Y and a p-dimensional vector x, BART (Bayesian Additive Regression Trees) approximates the conditional mean E[Y|x] with a sum-of-regression-trees model, in which each tree is constrained by a regularization prior to be a weak learner. Fitting and inference are accomplished via a scalable iterative Bayesian backfitting MCMC algorithm that generates samples from a posterior. Effectively, BART is a nonparametric Bayesian regression approach that uses dimensionally adaptive random basis elements. Motivated by ensemble methods in general, and boosting algorithms in particular, BART is defined by a statistical model: a prior and a likelihood. This approach enables full posterior inference, including point and interval estimates of the unknown regression function as well as the marginal effects of potential predictors. By keeping track of predictor inclusion frequencies, BART can also be used for model-free variable selection.

To further illustrate the modeling flexibility of BART, we introduce two elaborations, MBART and HBART. Exploiting the potential monotonicity of E[Y|x] in components of x, MBART incorporates such monotonicity with a multivariate basis of monotone trees, thereby enabling estimation of the decomposition of E[Y|x] into its unique monotone components. To allow for the possibility of heteroscedasticity, HBART incorporates an additional product-of-regression-trees model component for the conditional variance, thereby providing simultaneous inference about both E[Y|x] and Var[Y|x].

(This is joint research with Hugh Chipman, Matt Pratola, Rob McCulloch and Tom Shively.)

This talk is part of the Isaac Newton Institute Seminar Series.

## This talk is included in these lists:

- All CMS events
- Featured lists
- INI info aggregator
- Isaac Newton Institute Seminar Series
- School of Physical Sciences
- Seminar Room 2, Newton Institute
- bld31
Note that ex-directory lists are not shown.
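The sum-of-trees structure described in the abstract can be sketched in a deliberately simplified, non-Bayesian form. The toy below approximates E[Y|x] with a sum of weak tree learners (depth-1 stumps) fit by backfitting on residuals; it illustrates only the additive ensemble structure, substituting greedy fitting plus shrinkage for BART's regularization prior and backfitting MCMC. All function names are illustrative, not from any BART implementation.

```python
# Toy sketch of a sum-of-trees fit (NOT the BART MCMC): each weak learner
# is a depth-1 regression "stump", fit by backfitting on the residual left
# by the trees fit so far. Real BART instead samples trees from a posterior.

def fit_stump(xs, rs):
    """Greedy depth-1 regression tree: pick the split minimising squared error."""
    best = None
    for s in sorted(set(xs)):
        left = [r for x, r in zip(xs, rs) if x <= s]
        right = [r for x, r in zip(xs, rs) if x > s]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - ml) ** 2 for r in left)
               + sum((r - mr) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, s, ml, mr)
    _, s, ml, mr = best
    return lambda x: ml if x <= s else mr

def fit_sum_of_stumps(xs, ys, n_trees=20, shrink=0.3):
    """Backfitting loop: each stump is fit to the current residual,
    and its (shrunken) contribution is added to the ensemble."""
    trees, resid = [], list(ys)
    for _ in range(n_trees):
        t = fit_stump(xs, resid)
        trees.append(t)
        resid = [r - shrink * t(x) for x, r in zip(xs, resid)]
    return lambda x: sum(shrink * t(x) for t in trees)

# Step-function target: y = 0 for x < 0.5, y = 1 for x >= 0.5.
xs = [i / 10 for i in range(10)]
ys = [0.0] * 5 + [1.0] * 5
f = fit_sum_of_stumps(xs, ys)
print(round(f(0.2)), round(f(0.8)))  # prints "0 1"
```

The shrinkage factor plays the role that the regularization prior plays in BART: no single tree is allowed to explain much of the signal, so the fit emerges from the sum.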