
Bayesian Nonparametric inference for inverse problems using Gaussian priors


If you have a question about this talk, please contact Jan Bohr.

Abstract. In a statistical inverse problem one collects noisy indirect measurements of an unknown physical quantity of interest. Over the last decade, the Bayesian approach to inference for inverse problems has received increasing attention, mainly because a) it can be efficiently implemented in practice using MCMC methods, and b) it provides a principled framework for performing uncertainty quantification and testing scientific hypotheses.
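To fix ideas, a minimal sketch of the setting: observations Y = A f + noise of an unknown f, with a Gaussian prior placed on f. In the linear-Gaussian case the posterior is available in closed form (no MCMC is needed); all names, dimensions, and parameter values below are illustrative and not taken from the talk.

```python
import numpy as np

# Toy linear inverse problem: Y = A f + noise, Gaussian prior on f.
# Everything here (dimensions, sigma, tau) is an illustrative choice.
rng = np.random.default_rng(0)

n, d = 200, 10                               # observations, unknown's dimension
sigma = 0.1                                  # noise standard deviation
A = rng.normal(size=(n, d)) / np.sqrt(n)     # forward (measurement) operator
f_true = rng.normal(size=d)                  # ground truth generating the data
Y = A @ f_true + sigma * rng.normal(size=n)  # noisy indirect measurements

# Gaussian prior f ~ N(0, tau^2 I). For a linear forward map the posterior
# is again Gaussian, with precision = data precision + prior precision.
tau = 1.0
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(d) / tau**2)
post_mean = post_cov @ (A.T @ Y) / sigma**2

print(np.linalg.norm(post_mean - f_true))    # small recovery error
```

For nonlinear forward maps (such as the PDE example below) this conjugacy is lost, which is where MCMC sampling of the posterior comes in.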

The talk will present some results on the validation of Bayesian nonparametric procedures based on standard Gaussian process priors. A general framework is considered in which the posterior distributions arising from a flexible class of Gaussian priors are shown to concentrate around the ground truth generating the data at the optimal rate in “prediction risk”. An important (nonlinear) example is the recovery of the diffusivity in an elliptic PDE in divergence form, for which a convergence rate of the posterior mean estimator to the unknown is obtained. Finally, in a related linear inverse problem, a Bernstein-von Mises limit is derived, which entails convergence of the posterior distribution to a fixed Gaussian measure whose covariance structure attains the information lower bound. As a consequence, credible sets are shown to have asymptotically correct coverage and to shrink at the optimal rate.
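The practical payoff of a Bernstein-von Mises limit is that Bayesian credible sets become valid frequentist confidence sets. A hypothetical stand-in for this phenomenon, in the simplest Gaussian location model rather than the inverse problems of the talk, is to check by Monte Carlo that a 95% credible interval covers the truth at roughly the nominal rate while its width shrinks with the sample size:

```python
import numpy as np

# Monte Carlo coverage check in the toy model Y_i = theta + eps_i with a
# Gaussian prior theta ~ N(0, tau^2). Parameter values are illustrative.
rng = np.random.default_rng(1)

theta0, sigma, tau = 0.5, 1.0, 2.0   # truth, noise sd, prior sd
n, reps = 500, 2000
covered = 0
for _ in range(reps):
    Y = theta0 + sigma * rng.normal(size=n)
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)   # conjugate posterior
    post_mean = post_var * Y.sum() / sigma**2
    half = 1.96 * np.sqrt(post_var)                  # 95% credible half-width
    covered += (post_mean - half <= theta0 <= post_mean + half)

print(covered / reps)   # empirical coverage, close to the nominal 0.95
```

The interval half-width here scales like 1/sqrt(n), the parametric analogue of the optimal shrinkage rate mentioned in the abstract.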

This talk is part of the Cambridge Analysts' Knowledge Exchange series.


