Evaluating Deep Generative Models on Out-of-Distribution Inputs

If you have a question about this talk, please contact James Thorne.

Generative models are widely believed to be more robust to out-of-training-distribution inputs than conditional (i.e. predictive) models. In this talk, I challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses from those of house numbers, assigning a higher likelihood to the latter when the model is trained on the former. We posit that this phenomenon is caused by a mismatch between the model’s typical set and its areas of high probability density. In-distribution inputs should reside in the former but not necessarily in the latter. To determine whether or not inputs reside in the typical set, we propose a computationally efficient hypothesis test using the empirical distribution of model likelihoods. Experiments show that this test succeeds in detecting out-of-distribution inputs in many cases in which previously proposed threshold-based techniques fail.
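The typicality test described above can be sketched in a few lines. The following is a minimal illustration, not the speaker's implementation: it stands in a closed-form 1-D Gaussian for the deep generative model (so `log_prob` is exact), estimates the model's entropy from held-in validation likelihoods, bootstraps a rejection threshold for the batch typicality statistic, and then flags batches whose average negative log-likelihood strays too far from that entropy estimate. The batch size, bootstrap count, and quantile level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained deep generative model: a fixed standard Gaussian,
# so the log-density is exact. In practice this would be a flow, VAE, or
# PixelCNN evaluated on images.
def log_prob(x):
    return -0.5 * (x ** 2 + np.log(2 * np.pi))

# 1. Estimate the model's entropy from held-in validation data:
#    H_hat ~= -E[log p(x)] under the training distribution.
val = rng.normal(size=5000)
H_hat = -np.mean(log_prob(val))

# 2. Bootstrap the null distribution of the typicality statistic
#    | -(1/M) sum_m log p(x_m) - H_hat | for in-distribution batches,
#    and take a high quantile as the rejection threshold.
M, B, alpha = 25, 2000, 0.99  # batch size, bootstrap resamples, level
stats = np.array([
    abs(-np.mean(log_prob(rng.choice(val, size=M))) - H_hat)
    for _ in range(B)
])
eps = np.quantile(stats, alpha)

def is_ood(batch):
    """Reject (flag as out-of-distribution) if the batch's average
    negative log-likelihood lies outside the typical-set band."""
    return abs(-np.mean(log_prob(batch)) - H_hat) > eps

ood_batch = rng.normal(loc=4.0, size=M)  # shifted: clearly out-of-distribution
print(is_ood(ood_batch))
```

Note the contrast with naive thresholding on raw likelihood: a shifted batch can have *higher* density under the model yet still be rejected here, because the test measures distance from the typical set rather than density itself.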

This talk is part of the NLIP Seminar Series.
