Active Learning
If you have a question about this talk, please contact Konstantina Palla.
This RCC is going to be about active learning. I will start by giving some motivating examples of where active learning can be useful, both in the experimental sciences and in large-scale machine learning applications. I will then sketch a rough taxonomy of active learning methods, noting the distinctions between transductive and inductive active learning, and between loss-oriented and information-theoretic approaches. I will introduce a simple toy model for linearly separable binary classification and use it to illustrate the ideas behind different approaches to information-theoretic active learning. I will talk in detail about the method proposed by Tong and Koller (2001), which resulted in one of the most highly cited papers in machine learning, and contrast it with related methods such as query by committee. I will also touch upon relatively recent theoretical results by Steve Hanneke on the fast rates of convergence that certain active learning methods can achieve.
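To make the pool-based setting concrete, here is a minimal sketch of the "simple margin" heuristic from Tong and Koller (2001): at each round, the learner queries the label of the unlabelled point closest to the current SVM decision boundary. The synthetic data, scikit-learn SVM, and loop structure below are illustrative choices of mine, not the authors' original experimental setup.

```python
# Minimal pool-based active learning sketch (assumes scikit-learn).
# Simple margin heuristic: query the pool point nearest the SVM hyperplane.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Linearly separable toy data: the label is the sign of the first coordinate.
X_pool = rng.uniform(-1, 1, size=(200, 2))
y_pool = (X_pool[:, 0] > 0).astype(int)

# Seed the labelled set with one example from each class.
labelled = [int(np.argmax(y_pool == 0)), int(np.argmax(y_pool == 1))]
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

clf = SVC(kernel="linear", C=1e3)
for _ in range(10):
    clf.fit(X_pool[labelled], y_pool[labelled])
    # Simple margin: pick the unlabelled point with the smallest distance
    # to the separating hyperplane, i.e. the most uncertain one.
    margins = np.abs(clf.decision_function(X_pool[unlabelled]))
    query = unlabelled[int(np.argmin(margins))]
    labelled.append(query)   # the oracle reveals y_pool[query]
    unlabelled.remove(query)

print("training accuracy after 10 queries:", clf.score(X_pool, y_pool))
```

Query by committee would replace the margin criterion with disagreement among an ensemble of classifiers trained on the same labelled set, but the overall query-then-retrain loop is the same.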
Recommended reading
Tong and Koller (2001): http://jmlr.csail.mit.edu/papers/volume2/tong01a/tong01a.pdf
Hanneke, Rates of Convergence in Active Learning: http://www.stat.cmu.edu/~shanneke/docs/2009/active-rates-annals.pdf
This talk is part of the Machine Learning Reading Group @ CUED series.