How close are these distributions? A brief introduction to statistical distances and divergences.
If you have a question about this talk, please contact Elre Oldewage.
The question of whether two probability distributions are 'close' to each other arises in many contexts in statistics and machine learning, including hypothesis testing, approximate Bayesian inference and generative modelling. However, what it means for two probability distributions to be 'close' is somewhat subtle. In this reading group, we will give an overview of some of the different ways of measuring distance between probability distributions, as well as relationships between these notions of distance.
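To give a concrete flavour of two such distances, the sketch below computes the closed-form KL divergence between two univariate Gaussians and a simple empirical estimate of the squared Maximum Mean Discrepancy (the quantity behind the kernel two-sample test listed under further reading). It is a minimal illustration only; the function names, the Gaussian RBF kernel and the bandwidth choice are assumptions made for this example, not part of the talk materials.

# Minimal illustration (assumed: Gaussian RBF kernel, bandwidth = 1.0,
# univariate samples); not part of the talk materials.
import numpy as np


def kl_gaussians(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(P || Q) for univariate Gaussians P and Q."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2)
            - 0.5)


def mmd_squared(x, y, bandwidth=1.0):
    """Biased empirical estimate of the squared Maximum Mean Discrepancy
    between samples x ~ P and y ~ Q, using a Gaussian RBF kernel."""
    def rbf(a, b):
        sq_dists = (a[:, None] - b[None, :]) ** 2
        return np.exp(-sq_dists / (2 * bandwidth ** 2))

    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()


rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)  # samples from P = N(0, 1)
y = rng.normal(0.5, 1.0, size=500)  # samples from Q = N(0.5, 1)

print("KL(P || Q):", kl_gaussians(0.0, 1.0, 0.5, 1.0))  # 0.125 analytically
print("MMD^2 estimate:", mmd_squared(x, y))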
Required Reading:
None.
References and Further (Optional) Reading:
- On Integral Probability Metrics, φ-Divergences and Binary Classification. Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf and Gert R. G. Lanckriet. 2009.
- A Kernel Two-Sample Test. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf and Alexander Smola. 2012.
- Lecture Notes on Information Theory. Yury Polyanskiy and Yihong Wu. http://www.stat.yale.edu/~yw562/teaching/itlectures.pdf. (Particularly Chapters 6 and 7, on f-divergences.)
- Optimal Transport: Old and New. Cédric Villani. Chapter 6.
This talk is part of the Machine Learning Reading Group @ CUED series.