Optimal weighted nearest neighbour classifiers
Classifiers based on nearest neighbours are perhaps the simplest
and most intuitively appealing of all nonparametric classifiers. Arguably
the most obvious defect with the $k$-nearest neighbour classifier is that
it places equal weight on the class labels of each of the $k$ nearest
neighbours to the point being classified. Intuitively, one would expect
improvements in terms of the misclassification rate to be possible by
putting decreasing weights on the class labels of the successively more
distant neighbours. In this talk, we determine the asymptotically optimal
weighting scheme, and quantify the benefits attainable. Notably, the
improvements depend only on the dimension of the feature vectors, and not
on the underlying population densities. We also show that the bagged
nearest neighbour classifier falls within our framework, and compare it
with the optimal weighted nearest neighbour classifier.
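To make the weighting idea concrete, here is a minimal sketch of a weighted nearest neighbour classifier in Python. The weight formula follows my reading of the scheme in the linked paper, $w_i = \frac{1}{k}\{1 + \frac{d}{2} - \frac{d}{2k^{2/d}}(i^{1+2/d} - (i-1)^{1+2/d})\}$ for $i = 1, \dots, k$ (zero beyond $k$); the function names, the binary $\{0,1\}$ labels, and the Euclidean metric are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def optimal_weights(k, d):
        # Weights on the i-th nearest neighbour, i = 1, ..., k, following the
        # scheme stated in the paper (my reading of it):
        # w_i = (1/k) * [1 + d/2 - d/(2*k^(2/d)) * (i^(1+2/d) - (i-1)^(1+2/d))].
        i = np.arange(1, k + 1)
        return (1.0 / k) * (1.0 + d / 2.0
                            - d / (2.0 * k ** (2.0 / d))
                            * (i ** (1.0 + 2.0 / d) - (i - 1) ** (1.0 + 2.0 / d)))

    def wnn_classify(X, y, x, k):
        # Classify a single point x from training data (X, y) with y in {0, 1}:
        # a weighted majority vote over the k nearest neighbours of x.
        d = X.shape[1]
        order = np.argsort(np.linalg.norm(X - x, axis=1))[:k]  # k nearest points
        vote = optimal_weights(k, d) @ y[order]  # weighted vote for class 1
        return int(vote >= 0.5)

A quick sanity check on the weights: they are nonnegative, decrease in $i$, and sum exactly to one (the bracketed differences telescope to $k^{1+2/d}$); replacing them with constant weights $1/k$ recovers the ordinary unweighted $k$-nearest neighbour rule.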
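For the bagging connection, a short derivation (my own gloss, not a quotation from the paper): if the bagged classifier averages 1-nearest-neighbour votes over infinitely many resamples of size $m$ drawn with replacement from the $n$ training points, the weight on the $i$th nearest neighbour is the probability that it is the closest point appearing in a resample, namely $((n-i+1)/n)^m - ((n-i)/n)^m$. The helper name below is hypothetical.

    def bagged_nn_weights(n, m):
        # Probability that the i-th nearest neighbour is drawn into a
        # size-m with-replacement resample while all i-1 closer points are
        # missed; the differences telescope, so the weights sum to 1.
        i = np.arange(1, n + 1)
        return ((n - i + 1) / n) ** m - ((n - i) / n) ** m

With $m = qn$ for a fixed resampling fraction $q$, these weights decay roughly geometrically, like $(1 - e^{-q})\,e^{-q(i-1)}$, which is why the bagged nearest neighbour classifier sits inside the weighted framework and can be compared against the optimal weights above.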
The talk will be based on the following paper: http://arxiv.org/abs/1101.5783
This talk is part of the Machine Learning Reading Group @ CUED series.