Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance)

If you have a question about this talk, please contact Neil Deo.

Wasserstein GANs are based on the idea of minimising the Wasserstein distance between a real and a generated distribution. We provide an in-depth mathematical analysis of differences between the theoretical setup and the reality of training Wasserstein GANs. In this work, we gather both theoretical and empirical evidence that the WGAN loss is not a meaningful approximation of the Wasserstein distance. Moreover, we argue that the Wasserstein distance is not even a desirable loss function for deep generative models, and conclude that the success of Wasserstein GANs can in truth be attributed to a failure to approximate the Wasserstein distance.
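The gap the abstract describes can be illustrated with a minimal sketch (my own toy example, not from the talk): the Kantorovich-Rubinstein dual writes W1(P, Q) as a supremum of E_P[f] − E_Q[f] over all 1-Lipschitz critics f, but a WGAN critic only searches a restricted function family. Restricting to linear critics f(x) = a·x with |a| ≤ 1 makes the failure visible in one dimension:

```python
def exact_w1(p, q):
    """Exact W1 between equal-size 1D empirical samples: match sorted points."""
    ps, qs = sorted(p), sorted(q)
    return sum(abs(x - y) for x, y in zip(ps, qs)) / len(ps)

def linear_critic_w1(p, q):
    """Dual value over linear 1-Lipschitz critics f(x) = a*x, |a| <= 1:
    sup over a of a*(mean(p) - mean(q)), which equals |mean(p) - mean(q)|."""
    return abs(sum(p) / len(p) - sum(q) / len(q))

# Two samples with equal means but very different spread:
real = [-2.0, -1.0, 1.0, 2.0]
fake = [-0.5, -0.25, 0.25, 0.5]

print(exact_w1(real, fake))          # 1.125 -- the distributions clearly differ
print(linear_critic_w1(real, fake))  # 0.0   -- the restricted critic sees no gap
```

The restricted dual is always a lower bound on the true distance, and here it collapses to zero even though W1 is large; a neural critic is far richer than a linear one, but the same in-principle mismatch between the achievable supremum and the true Wasserstein distance is what the talk analyses.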

This talk is based on the following joint work:

This talk is part of the CMI Student Seminar Series.



© 2006-2023, University of Cambridge.