
Neighbourhood Components Analysis


If you have a question about this talk, please contact Phil Cowans.

Say you want to do K-Nearest Neighbour classification. Besides selecting K, you also have to choose a distance function in order to define “nearest”. I’ll talk about two new methods for learning—from the data itself—a distance measure to be used in KNN classification. One algorithm, Neighbourhood Components Analysis (NCA), directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. The other (just submitted to NIPS!) tries to collapse all points in the same class as close together as possible. Both algorithms can also learn a low-dimensional linear embedding of labelled data that can be used for data visualization and very fast classification in high dimensions. Of course, the resulting classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. (Joint work with Jacob Goldberger and Amir Globerson)
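The stochastic leave-one-out score that NCA maximizes can be sketched briefly: each point picks a neighbour with softmax probability based on distance under a learned linear transform, and the objective is the expected number of points that pick a same-class neighbour. The following is a minimal NumPy sketch under those assumptions (the function name and squared-exponential form follow the NCA paper; this is illustrative, not the speakers' implementation):

```python
import numpy as np

def nca_objective(A, X, y):
    """Stochastic leave-one-out KNN score that NCA maximizes (a sketch).

    A : (d', d) linear transform being learned
    X : (n, d) data matrix, y : (n,) class labels
    """
    Z = X @ A.T                                   # project into learned space
    diff = Z[:, None, :] - Z[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)               # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                  # a point never picks itself
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)             # softmax neighbour probabilities
    same_class = (y[:, None] == y[None, :])
    # Expected number of correctly classified points under stochastic 1-NN
    return (P * same_class).sum()
```

Learning then amounts to gradient ascent on this score with respect to A; with a rectangular A (d' < d) the same objective yields the low-dimensional embedding mentioned above.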

This talk is part of the Inference Group series.




© 2006-2023, University of Cambridge.