Explicit stabilised Runge-Kutta methods and their application to Bayesian inverse problems
If you have a question about this talk, please contact Dr Sergio Bacallado.

The concept of Bayesian inverse problems provides a coherent mathematical and algorithmic framework that enables researchers to combine mathematical models with the (often vast) datasets routinely available today in many fields of engineering, science and technology. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the posterior distribution, giving rise to computationally challenging high-dimensional optimization and sampling problems.

In this talk, we will connect these optimization and sampling problems to the large-time behaviour of solutions to (stochastic) differential equations. Establishing such a connection allows us to draw on existing knowledge from the numerical analysis of differential equations. In particular, numerical stability is key to a well-performing optimization or sampling algorithm: the larger the time-step that can be used while the limiting behaviour of the underlying differential equation is preserved, the more computationally efficient the algorithm.

With this in mind, we will explore the applicability of explicit stabilised Runge-Kutta methods to optimization and sampling problems. These methods are optimal in terms of their stability properties within the class of explicit integrators, and we will show that, when used as optimization methods, they match the optimal convergence rate of the conjugate gradient method for quadratic optimization problems. Numerical investigations indicate that in the general case they are able to outperform state-of-the-art optimization methods such as Nesterov's accelerated method.

In the case of sampling, we will investigate their applicability to Bayesian inverse problems arising in computational imaging. An additional complexity arises there because many such problems contain non-differentiable terms which, when regularised, lead to extra stiffness, making explicit stabilised methods even more suitable. This is illustrated by a range of numerical experiments showing that, for the same computational cost as current state-of-the-art methods, explicit stabilised methods deliver much better MCMC samples.

This is joint work with Armin Eftekhari (EPFL), Bart Vandereycken (Geneva), Gilles Vilmart (Geneva), Marcelo Pereyra (Heriot-Watt) and Luis Vargas (Edinburgh).
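To make the stability point concrete: gradient descent is explicit Euler applied to the gradient flow x'(t) = -∇f(x(t)), so its step size is capped at roughly 2/L for an L-smooth objective, whereas a first-order Chebyshev method with s internal stages stretches the stability interval to about 2s²/L. The sketch below is hypothetical code, not the speaker's implementation: it uses the undamped variant on a synthetic quadratic, and all names in it are my own. The conjugate-gradient-rate result mentioned above concerns suitably tuned variants, not this bare-bones version.

```python
import numpy as np

def chebyshev_gradient_step(grad, x, h, s):
    # One step of the (undamped) first-order Chebyshev method for the
    # gradient flow x' = -grad(x).  Its stability polynomial is
    # T_s(1 + z/s^2), stable on [-2*s^2, 0], so s gradient evaluations
    # buy a time-step roughly s^2 times longer than explicit Euler's.
    k_prev, k = x, x - (h / s**2) * grad(x)        # stages K_0, K_1
    for _ in range(2, s + 1):
        # Three-term recurrence:
        # K_j = 2 K_{j-1} - K_{j-2} - 2 (h/s^2) grad(K_{j-1})
        k_prev, k = k, 2.0 * k - k_prev - 2.0 * (h / s**2) * grad(k)
    return k

# Toy strictly convex quadratic f(x) = 0.5 x^T A x - b^T x (synthetic data).
rng = np.random.default_rng(0)
Q = rng.standard_normal((100, 100))
A = Q.T @ Q / 100 + 1e-3 * np.eye(100)             # SPD and ill-conditioned
b = rng.standard_normal(100)
grad = lambda x: A @ x - b

L = np.linalg.eigvalsh(A).max()                    # smoothness constant
s = 10
h = 0.9 * 2.0 * s**2 / L                           # ~90x explicit Euler's 2/L limit
x = np.zeros(100)
for _ in range(100):
    x = chebyshev_gradient_step(grad, x, h, s)
print("residual:", np.linalg.norm(A @ x - b))
```

In practical codes a small damping parameter is added so that the stability polynomial stays strictly below one in modulus inside the interval, at the cost of a slightly shorter stability interval.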
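For sampling, the reference dynamics is the overdamped Langevin equation dX_t = -∇f(X_t) dt + √2 dW_t, whose invariant measure has density proportional to exp(-f); the basic explicit discretisation, Euler-Maruyama, inherits the same stiffness-limited step size. As a rough illustration of how the Chebyshev sweep can be reused for sampling, here is a splitting-style sketch in the spirit of stabilised Langevin integrators, not the speaker's exact scheme; it reuses chebyshev_gradient_step and the toy quadratic from the sketch above.

```python
def stabilised_langevin_step(grad, x, h, s, rng):
    # Hypothetical sketch: inject the full Brownian increment, then push it
    # through the stabilised deterministic drift sweep defined above.
    noise = np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    return chebyshev_gradient_step(grad, x + noise, h, s)

# Crude chain targeting exp(-f), here the Gaussian N(A^{-1} b, A^{-1}).
# A stable but overly large h adds discretisation bias, so the step is
# kept well inside the stability interval.
x, samples = np.zeros(100), []
for _ in range(2000):
    x = stabilised_langevin_step(grad, x, h / 10.0, s, rng)
    samples.append(x.copy())
err = np.linalg.norm(np.mean(samples, axis=0) - np.linalg.solve(A, b))
print("sample mean error:", err)
```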
This talk is part of the Statistics series.