Random Projections
If you have a question about this talk, please contact Konstantina Palla.
A fundamental result of Johnson and Lindenstrauss states that one may randomly project a collection of data points into a much lower-dimensional space while approximately preserving all pairwise distances. How is this possible? Recent developments go even further: non-linear randomised projections can be used to approximate kernel machines and scale them to datasets with millions of features and samples. In this talk we will explore the theory behind random projection methods and perform live demos to demonstrate their effectiveness.
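The distance-preservation claim can be checked directly. The sketch below (an illustrative example, not material from the talk; the dimensions `n`, `d`, `k` are arbitrary choices) projects high-dimensional points through a random Gaussian matrix scaled by 1/√k, in the style of the Johnson–Lindenstrauss lemma, and compares pairwise distances before and after:

```python
import numpy as np

rng = np.random.default_rng(0)

# n points in a d-dimensional space, projected down to k dimensions.
n, d, k = 50, 10_000, 1_000
X = rng.normal(size=(n, d))

# Random Gaussian projection matrix, scaled so that squared norms
# are preserved in expectation (each entry has variance 1/k).
R = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ R

def pairwise_distances(A):
    # Euclidean distances via the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (A ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.sqrt(np.maximum(d2, 0.0))

D_orig = pairwise_distances(X)
D_proj = pairwise_distances(Y)

# Ratio of projected to original distance for each distinct pair of points;
# the JL lemma says these concentrate around 1 for large enough k.
iu = np.triu_indices(n, k=1)
ratios = D_proj[iu] / D_orig[iu]
print(ratios.min(), ratios.max())
```

Despite reducing the dimension by a factor of ten, every pairwise distance is preserved to within a few percent; the distortion shrinks as k grows, at a rate independent of the original dimension d.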
This talk is part of the Machine Learning Reading Group @ CUED series.