
Sampling as Optimization

If you have a question about this talk, please contact Robert Pinsler.

Sampling and optimization are often thought of as alternative methods for model fitting. In this meeting of the reading group, we summarize recent results that draw connections between the two. The key result is due to Jordan, Kinderlehrer, and Otto (1998), who show that the Fokker-Planck equation is the gradient flow of the Kullback-Leibler divergence to the target distribution in the space of probability measures equipped with the Wasserstein metric. Since the Fokker-Planck equation describes the evolution of the marginal distribution of Langevin dynamics on the model parameters, minimizing the KL divergence by gradient flow in measure space corresponds to running Langevin dynamics in parameter space. With this result established, we then discuss implications for discretization schemes, the view of SGD as approximate Bayesian inference, and models for which sampling can be faster than optimization.
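To make the discretization point concrete, below is a minimal sketch (illustrative, not part of the talk materials) of the unadjusted Langevin algorithm (ULA), the Euler-Maruyama discretization of the Langevin SDE d(theta_t) = grad log pi(theta_t) dt + sqrt(2) dW_t, whose marginal law evolves according to the Fokker-Planck equation above. The function names, step size, and iteration count here are assumptions chosen for the example.

import numpy as np

def ula(grad_log_target, theta0, step_size=1e-2, n_steps=5000, rng=None):
    # Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    #   d(theta_t) = grad log pi(theta_t) dt + sqrt(2) dW_t.
    # For small step sizes, the iterates approximately sample from pi
    # (ULA incurs a discretization bias; adding a Metropolis-Hastings
    # correction yields MALA, which removes it).
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array(theta0, dtype=float)
    samples = np.empty((n_steps,) + theta.shape)
    for t in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + step_size * grad_log_target(theta) \
                + np.sqrt(2.0 * step_size) * noise
        samples[t] = theta
    return samples

# Example: target pi = N(0, I), for which grad log pi(theta) = -theta.
samples = ula(lambda th: -th, theta0=np.zeros(2))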

This talk is part of the Machine Learning Reading Group @ CUED series.

