
Herding or a '3rd way to learn'


If you have a question about this talk, please contact Shakir Mohamed.

In this RCC, we will cover “Herding”, a peculiarly named learning approach recently proposed by Max Welling (2009). Even though this is brand-new research, it has interesting links with standard machine learning concepts that we will review. We will split the RCC into two parts, showing the evolution typical in research from the original idea (in the first paper) to a more modern viewpoint (in the latest paper).

“Herding” is a novel approach to learning which seems a bit peculiar at first sight but has interesting properties and genuinely good empirical performance. It can be seen as a ‘3rd way to learn’: the first being traditional frequentist point estimates of parameters, the second maintaining a Bayesian posterior over parameters. Herding lies somewhere in between: rather than maintaining samples over parameters as in Bayesian learning, it navigates the parameter space in a deterministic (but almost chaotic) way which does not converge to any particular point estimate. During this exploration, it produces pseudosamples which can be used much as one would use samples from an MCMC method after learning. The two main advantages of herding are that:
  1. it is computationally cheaper than traditional learning of Markov random fields, since it combines learning and inference in a single deterministic step;
  2. the resulting pseudosamples exhibit strong anticorrelations, thereby providing a more uniform coverage of the space; this means that the number of (pseudo)samples needed to approximate relevant integrals is substantially smaller than in the case of random sampling (namely, O(1/T) convergence vs. O(1/sqrt(T)) for i.i.d. sampling).
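To make the dynamics concrete, here is a minimal illustrative sketch (not from the talk): herding keeps a weight vector, repeatedly picks the state maximising the inner product with the current weights, and nudges the weights towards the target moments. The toy setup below assumes binary states of length 3 with identity features, so herding simply matches a chosen vector of means.

```python
import itertools
import numpy as np

# Toy setup (illustrative assumption): binary states of length 3, with the
# feature map phi(s) = s itself, so herding matches the target means.
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
target_moments = np.array([0.7, 0.2, 0.5])  # desired feature averages

w = target_moments.copy()   # initialise weights at the target moments
samples = []
for t in range(2000):
    # Pick the state maximising <w, phi(s)> (phi is the identity here).
    s = states[np.argmax(states @ w)]
    samples.append(s)
    # Herding update: move the weights towards the moment-matching direction.
    w += target_moments - s

empirical = np.mean(samples, axis=0)
print(empirical)  # close to [0.7, 0.2, 0.5]
```

Because the empirical average of the pseudosamples equals the target moments up to a (w_0 - w_T)/T correction, and the weights stay bounded, the moment error shrinks as O(1/T) rather than the O(1/sqrt(T)) of i.i.d. sampling.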
In the first half of the RCC, Simon will cover the original herding paper:
  • Herding Dynamical Weights to Learn, Max Welling, ICML 2009 - pdf
    which introduced herding as an algorithm for learning/sampling in the context of Markov random fields (and exhibits property 1. mentioned above). The first part therefore concentrates on links with previous approaches to learning under Markov random fields.
    [Note that if you are interested enough in the topic to read this paper very carefully, there is a correction to the proof of recurrence here.]
In the second half of the RCC, Ferenc will concentrate on the most recent paper:
  • Super-Samples from Kernel Herding, Yutian Chen, Max Welling, Alex Smola, UAI 2010 - pdf
    which introduces kernel herding and presents a more modern viewpoint on herding in the context of approximating distributions (where property 2 mentioned above is exploited). Herding here can be understood as a greedy optimisation for approximating a probability distribution with the empirical distribution of pseudosamples, in such a way that certain nonlinear moments of the original distribution are preserved. Connections to message passing, Monte Carlo, and other relevant methods will be discussed.
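The greedy optimisation view can be sketched as follows (an illustrative assumption, not code from the paper): each new pseudosample maximises the kernel mean of the target distribution minus the average similarity to points already chosen, which is what induces the anticorrelations. Here a large i.i.d. sample from a 2-D Gaussian stands in for the target distribution, and the pseudosamples are greedily picked from a candidate pool.

```python
import numpy as np

rng = np.random.default_rng(1)

def k(a, b):
    # Gaussian (RBF) kernel between the rows of a and the rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / 2.0)

# A large sample stands in for the target distribution p.
pool = rng.normal(size=(500, 2))
mu_p = k(pool, pool).mean(axis=1)  # estimate of E_{x'~p} k(x, x') per candidate

chosen = []
score_sum = np.zeros(len(pool))
for t in range(20):
    # Greedy step: maximise the kernel mean minus the average similarity
    # to the points chosen so far (the penalty induces anticorrelation).
    idx = np.argmax(mu_p - score_sum / (t + 1))
    chosen.append(idx)
    score_sum += k(pool, pool[idx][None, :])[:, 0]

super_samples = pool[chosen]
```

With only 20 such "super-samples", kernel expectations under the empirical distribution already approximate those under the pool closely; this is the fast-convergence property exploited in the paper.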

This talk is part of the Machine Learning Reading Group @ CUED series.

