Do Deep Generative Models Know What They Don't Know?

If you have a question about this talk, please contact Edoardo Maria Ponti.

Abstract: A neural network deployed in the wild may be asked to make predictions on inputs drawn from a different distribution than its training data. A plethora of work has shown that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely believed to be robust to such overconfident mistakes, since modeling the density of the input features should allow novel, out-of-distribution inputs to be detected. In this talk, I challenge this assumption, focusing the analysis on flow-based generative models in particular, since they are trained and evaluated via the exact marginal likelihood. We find that the model density cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from images of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. This behavior persists even when the flows are restricted to constant-volume transformations. These admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the locations and variances of the data and the model curvature. Our results urge caution when using density estimates from deep generative models on out-of-distribution inputs.
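The failure mode the abstract describes can be illustrated with a deliberately simple toy model (not the talk's flow models): fit a Gaussian density to one "training" distribution, then score a second distribution with the same mean but smaller variance. Because the second distribution's samples sit inside the model's high-density region, they receive a *higher* average likelihood, mirroring the CIFAR-10/SVHN observation. All names and distribution parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" distribution (stand-in for CIFAR-10): zero mean, unit variance.
train = rng.normal(loc=0.0, scale=1.0, size=(10_000, 8))

# Fit a diagonal Gaussian density model to the training data.
mu, sigma = train.mean(axis=0), train.std(axis=0)

def mean_log_likelihood(x):
    # Average log-density of x under the fitted diagonal Gaussian.
    z = (x - mu) / sigma
    log_density = -0.5 * z**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    return log_density.sum(axis=1).mean()

# "Out-of-distribution" data (stand-in for SVHN): same mean, smaller
# variance, so its samples concentrate where the model density is highest.
ood = rng.normal(loc=0.0, scale=0.5, size=(10_000, 8))

ll_in = mean_log_likelihood(train)
ll_out = mean_log_likelihood(ood)
print(ll_in, ll_out)
assert ll_out > ll_in  # the never-seen distribution scores HIGHER
```

A likelihood threshold chosen to accept the training data would therefore also accept all of the out-of-distribution samples, which is the essence of why raw density estimates can fail as an OOD detector.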

This talk is part of the Language Technology Lab Seminars series.


© 2006-2024, University of Cambridge.