
On Different Distances Between Distributions and Generative Adversarial Networks



Generative adversarial networks (GANs) are notoriously difficult to train. We show that these problems arise naturally when trying to learn distributions whose supports lie on low-dimensional manifolds. We show how these problems are consequences of trying to optimize the classical divergences (KL, JSD, etc.) between the data distribution and the model distribution, and that they are symptoms of a more general phenomenon, pointing towards the inefficacy of the usual divergences in certain settings. We then bring into play the Wasserstein distance, which we prove does not suffer from the same behaviour, and provide a first step towards an algorithm that approximately optimizes this distance.
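The failure mode described above can be seen numerically. The following is a minimal sketch (assuming SciPy is available; the grid size and the `jsd_discrete` helper are illustrative choices, not from the talk): for two point masses with disjoint supports, the Jensen-Shannon divergence saturates at log 2 no matter how far apart they are, giving no gradient signal, while the Wasserstein distance grows with the separation.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D earth mover's distance

def jsd_discrete(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (in nats)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(np.where(a > 0, a * np.log((a + eps) / (b + eps)), 0.0))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Point masses on the grid {0, 1, ..., 9}: P sits at 0, Q_theta sits at theta.
# Their supports are disjoint for every theta > 0, so JSD is stuck at log 2,
# while the Wasserstein distance equals theta and still points Q towards P.
grid = np.arange(10)
for theta in [1, 2, 5]:
    p = np.zeros(10); p[0] = 1.0
    q = np.zeros(10); q[theta] = 1.0
    jsd = jsd_discrete(p, q)                                   # ~log 2 = 0.6931 for all theta
    w = wasserstein_distance(grid, grid, u_weights=p, v_weights=q)  # equals theta
    print(f"theta={theta}: JSD={jsd:.4f}, W={w:.1f}")
```

This mirrors the parallel-lines example used to motivate the Wasserstein GAN: only the Wasserstein distance varies smoothly with the parameter that moves the model's support.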

Relevant papers:

Towards Principled Methods for Training Generative Adversarial Networks

Wasserstein GAN

This talk is part of the Machine Learning @ CUED series.
