
Learning under model misspecification


If you have a question about this talk, please contact Mateja Jamnik.

Join us on Zoom

Bayesian statistics is one of the most widely used tools for uncertainty modelling in deep learning. In this talk, however, we will present recent research that casts doubt on the optimality of the Bayesian approach for this task. More precisely, we will present a novel PAC-Bayesian analysis of Bayesian model averaging showing that it is optimal for generalization only when the model class is perfectly specified, which rarely happens in practice.
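To make the object under discussion concrete, here is a minimal sketch of Bayesian model averaging over a finite hypothesis set: the posterior-weighted mixture of per-model predictive probabilities. All numbers, the number of models K, and the uniform weights are synthetic stand-ins, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive probabilities P(y=1 | x, model_k) of K models
# on N test inputs (synthetic values for illustration only).
K, N = 5, 4
probs = rng.uniform(0.05, 0.95, size=(K, N))

# Posterior weights over the models; uniform here as a stand-in for a
# real Bayesian posterior over the model class.
weights = np.full(K, 1.0 / K)

# Bayesian model average: p(y=1 | x) = sum_k w_k * p(y=1 | x, model_k)
bma_predictive = weights @ probs  # shape (N,)
```

The PAC-Bayesian result described above concerns exactly this averaged predictive: when none of the K models matches the data-generating process, averaging within the misspecified class need not be the best one can do.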

Building on this theoretical analysis, we will introduce a novel learning framework based on the minimization of a new family of PAC-Bayesian bounds which explicitly assume that the model class is misspecified (a much more realistic assumption). We will also discuss strong connections with deep ensemble methods.
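For readers unfamiliar with the general shape of such bounds, the sketch below computes a classical McAllester-style PAC-Bayes bound over a finite hypothesis set; it is not the new family of bounds presented in the talk, and the losses, the Gibbs-style posterior, and the temperature `beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic empirical 0-1 risks of K hypotheses on n training points.
n, K, delta = 1000, 5, 0.05
emp_losses = rng.uniform(0.1, 0.4, size=K)

prior = np.full(K, 1.0 / K)        # uniform prior P over hypotheses
beta = 10.0                         # illustrative inverse temperature
posterior = np.exp(-beta * emp_losses)
posterior /= posterior.sum()        # Gibbs-style posterior Q

kl = np.sum(posterior * np.log(posterior / prior))  # KL(Q || P)
gibbs_risk = posterior @ emp_losses                 # empirical Gibbs risk

# McAllester bound: with probability >= 1 - delta over the sample,
#   L(Q) <= L_hat(Q) + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n))
bound = gibbs_risk + np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))
```

Minimizing a bound of this shape in the posterior Q trades empirical risk against the KL complexity term; the framework in the talk modifies the bound itself so that it remains meaningful when no hypothesis in the class is correct.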

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.


© 2006-2021 Talks.cam, University of Cambridge.