Neighbourhood Components Analysis
If you have a question about this talk, please contact Phil Cowans.

Say you want to do K-Nearest Neighbour classification. Besides selecting K, you also have to choose a distance function in order to define "nearest". I'll talk about two new methods for learning, from the data itself, a distance measure to be used in KNN classification. One algorithm, Neighbourhood Components Analysis (NCA), directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. The other (just submitted to NIPS!) tries to collapse all points in the same class as close together as possible. Both algorithms can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and very fast classification in high dimensions. Of course, the resulting classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them.

(Joint work with Jacob Goldberger and Amir Globerson)

This talk is part of the Inference Group series.
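For readers who want a concrete picture of the "stochastic variant of the leave-one-out KNN score" that NCA maximizes, here is a minimal NumPy sketch of that objective for a given linear transform. The function and variable names (nca_objective, A, X, y) are illustrative, not taken from the talk.

```python
import numpy as np

def nca_objective(A, X, y):
    """Expected number of correctly classified training points under
    stochastic nearest-neighbour selection, for a linear transform A.
    NCA learns A by maximizing this quantity (e.g. by gradient ascent)."""
    Z = X @ A.T                                            # project data into the learned space
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                           # a point never selects itself as neighbour
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)                      # p_ij: softmax over candidate neighbours j
    same_class = (y[:, None] == y[None, :])                # mask of same-label pairs
    p_i = (P * same_class).sum(axis=1)                     # prob. that point i picks a correct neighbour
    return p_i.sum()
```

Because the softmax makes the objective differentiable in A, it can be optimized directly, unlike the hard leave-one-out KNN error.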