Fundamental limits of deep generative neural networks

MDL - Mathematics of deep learning

Deep neural networks have been employed very successfully as generative models for complex natural data such as images and natural language. In practice, this is achieved by training deep networks to transform simple low-dimensional distributions, such as uniform or Gaussian, into high-dimensional probability distributions. The aim of this talk is to develop an understanding of the fundamental representational capabilities of deep generative neural networks. Specifically, we show that every d-dimensional probability distribution of bounded support can be generated by deep ReLU networks from a 1-dimensional uniform input distribution. What is more, this is possible without incurring a cost, in terms of approximation error as measured in Wasserstein distance, relative to generating the d-dimensional target distribution from d independent random variables. This is enabled by a space-filling approach which highlights the importance of network depth in driving the Wasserstein distance between the target distribution and its neural network approximation to zero. Finally, we show that the number of bits needed to encode the corresponding generative networks equals the fundamental limit for encoding probability distributions as dictated by quantization theory.
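
To give a flavour of the space-filling idea, the following Python sketch (an illustration under my own assumptions, not the construction presented in the talk; the names hat and space_filling_map are hypothetical, and numpy/scipy are assumed available) composes tent maps, each expressible with a single hidden ReLU layer, to build a piecewise-linear curve whose number of oscillations grows exponentially with depth, and then estimates the Wasserstein distance between the pushforward of a 1-D uniform input and the uniform distribution on the unit square.

# Illustrative sketch only: push a 1-D uniform input through a piecewise-linear
# "space-filling" map (realizable by a deep ReLU network, since it is continuous
# and piecewise linear) and estimate how close the pushforward is, in Wasserstein
# distance, to the uniform distribution on the unit square.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def hat(x):
    """Tent map on [0,1], expressible with one hidden ReLU layer:
    2*relu(x) - 4*relu(x - 0.5)."""
    return 2.0 * np.maximum(x, 0.0) - 4.0 * np.maximum(x - 0.5, 0.0)

def space_filling_map(t, depth):
    """Map t in [0,1] to (t, y), where y is the depth-fold composition of the
    tent map: a triangle wave with 2**depth monotone pieces of width 2**(-depth).
    Composition corresponds to network depth, so depth drives the oscillation
    count, and the W1 distance to uniform on the square is of order 2**(-depth)."""
    y = t.copy()
    for _ in range(depth):
        y = hat(y)
    return np.stack([t, y], axis=1)

rng = np.random.default_rng(0)
n, depth = 500, 4
generated = space_filling_map(rng.uniform(size=n), depth)   # pushforward of U[0,1]
target = rng.uniform(size=(n, 2))                           # samples of U([0,1]^2)

# Empirical W1 between two equal-size samples with uniform weights: the optimal
# coupling is a perfect matching, computable with the Hungarian algorithm.
cost = cdist(generated, target)
row, col = linear_sum_assignment(cost)
print(f"empirical W1 at depth {depth}: {cost[row, col].mean():.3f}")

The exact rates and the bit-level encoding results are specific to the construction discussed in the talk; the sketch is only meant to show why composition, i.e. depth, buys exponentially many oscillations per layer and hence a rapidly shrinking Wasserstein error.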

This talk is part of the Isaac Newton Institute Seminar Series.
