
Random projection ensemble classification


If you have a question about this talk, please contact Yingzhen Li.

We introduce a very general method for high-dimensional classification, based on careful combination of the results of applying an arbitrary base classifier to random projections of the feature vectors into a lower-dimensional space. In one special case presented here, the random projections are divided into non-overlapping blocks, and within each block we select the projection yielding the smallest estimate of the test error. Our random projection ensemble classifier then aggregates the results of applying the base classifier on the selected projections, with a data-driven voting threshold to determine the final assignment. Our theoretical results elucidate the effect on performance of increasing the number of projections. Moreover, under a boundary condition implied by the sufficient dimension reduction assumption, we control the test excess risk of the random projection ensemble classifier. A simulation comparison with several other popular high-dimensional classifiers reveals its excellent finite-sample performance. This is joint work with Richard Samworth.
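The procedure described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes a 5-nearest-neighbour base classifier, axis-orthonormalised Gaussian projections, cross-validation as the test-error estimate within each block, and a fixed voting threshold `alpha` in place of the paper's data-driven choice. All function and parameter names (`random_projection_ensemble`, `B1`, `B2`, `d`) are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def random_projection_ensemble(X_train, y_train, X_test,
                               d=2, B1=20, B2=10, alpha=0.5, seed=0):
    """Sketch of a random projection ensemble classifier.

    B1 blocks of B2 random projections into R^d; within each block keep
    the projection whose base classifier has the smallest estimated
    error, then aggregate votes over the B1 selected projections.
    """
    rng = np.random.default_rng(seed)
    p = X_train.shape[1]
    votes = np.zeros(X_test.shape[0])

    for _ in range(B1):
        best_err, best_Q = np.inf, None
        for _ in range(B2):
            # Gaussian matrix with orthonormalised columns (QR step),
            # giving a projection from R^p down to R^d.
            A = rng.standard_normal((p, d))
            Q, _ = np.linalg.qr(A)  # p x d, orthonormal columns
            # Estimate of the test error for this projection
            # (the paper allows any such estimate; CV is one choice).
            clf = KNeighborsClassifier(n_neighbors=5)
            err = 1.0 - cross_val_score(clf, X_train @ Q, y_train, cv=5).mean()
            if err < best_err:
                best_err, best_Q = err, Q
        # Refit the base classifier on the block's selected projection
        # and record its votes on the test points.
        clf = KNeighborsClassifier(n_neighbors=5)
        clf.fit(X_train @ best_Q, y_train)
        votes += clf.predict(X_test @ best_Q)

    # Fixed threshold here; the paper selects it in a data-driven way.
    return (votes / B1 > alpha).astype(int)
```

For binary labels in {0, 1}, `votes / B1` is the fraction of selected projections voting for class 1, so the threshold plays the role of the voting cutoff in the abstract.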

Paper: http://arxiv.org/abs/1504.04595

This talk is part of the Machine Learning Reading Group @ CUED series.


© 2006-2019 Talks.cam, University of Cambridge.