Bayesian inference and uncertainty quantification in non-linear inverse problems with Gaussian priors

RCLW02 - Calibrating prediction uncertainty: statistics and machine learning perspectives

We study the asymptotic frequentist coverage and approximate Gaussianity of Bayesian posterior credible sets in nonlinear inverse problems when a Gaussian prior is placed on the parameter of the PDE. The aim is to ensure valid frequentist coverage of Bayes credible intervals when estimating continuous linear functionals of the parameter. Our results show that Bayes credible intervals have conservative coverage under certain smoothness assumptions on the parameter and a compatibility condition between the likelihood and the prior, regardless of whether an efficient limit exists or a Bernstein-von Mises (BvM) theorem holds. In the latter case, our work yields a result with more relaxed sufficient conditions than previous works. We illustrate the practical utility of these results by estimating the conductivity coefficient of a second-order elliptic PDE, where a near-$N^{-1/2}$ contraction rate and conservative coverage are obtained for linear functionals that were shown not to be estimable efficiently.

Bayesian methods are attractive for uncertainty quantification but assume knowledge of the likelihood model, i.e. of the data-generating process. This assumption is difficult to justify in many inverse problems. We therefore study the contraction rate of posterior distributions when the model is misspecified. Given a prior distribution and a random sample from a distribution $P_0$, which may not lie in the support of the prior, we show that the posterior concentrates its mass near the points in the support of the prior that minimize the Kullback-Leibler divergence with respect to $P_0$. Convexity is not required, and the existence of a minimizer is not taken for granted.

Joint work with Y. Baek and S. Mukherjee.
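For concreteness, a standard formulation of the conductivity problem above is the divergence-form elliptic equation (a schematic sketch in our own notation; the talk's exact observation model may differ):

$$\nabla \cdot (\gamma \nabla u) = f \quad \text{in } \mathcal{O}, \qquad u = g \quad \text{on } \partial \mathcal{O},$$

where the unknown conductivity $\gamma > 0$ carries the Gaussian prior and the targets of inference are continuous linear functionals $\gamma \mapsto \langle \gamma, \psi \rangle$.

The misspecified-model result can likewise be stated schematically (again in our notation, assuming i.i.d. observations $X_1, \dots, X_N \sim P_0$ and writing $K(P, Q)$ for the Kullback-Leibler divergence): for every $\epsilon > 0$,

$$\Pi\Big( \theta \in \mathrm{supp}(\Pi) : K(P_0, P_\theta) \le \inf_{\theta' \in \mathrm{supp}(\Pi)} K(P_0, P_{\theta'}) + \epsilon \;\Big|\; X_1, \dots, X_N \Big) \to 1 \quad \text{in $P_0$-probability.}$$

Since the existence of a KL minimizer is not taken for granted, the statement is phrased in terms of $\epsilon$-near-minimizers within the support of the prior.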

This talk is part of the Isaac Newton Institute Seminar Series.
