BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Statistics
SUMMARY:Explicit stabilised Runge-Kutta methods and their applicatio
 n to Bayesian inverse problems - Kostas Zygalakis\, University of E
 dinburgh
DTSTART;TZID=Europe/London:20190308T160000
DTEND;TZID=Europe/London:20190308T170000
UID:TALK115927@talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/115927
DESCRIPTION:The concept of Bayesian inverse problems provides a coher
 ent mathematical and algorithmic framework that enables research
 ers to combine mathematical models with the (often vast) datasets r
 outinely available today in many fields of engineering science and t
 echnology. The ability to solve such inverse problems depends cruci
 ally on the efficient calculation of quantities relating to the pos
 terior distribution\, giving rise to computationally challenging hi
 gh-dimensional optimization and sampling problems. In this talk\, w
 e will connect the corresponding optimization and sampling problems t
 o the large-time behaviour of solutions to (stochastic) differentia
 l equations. Establishing such a connection allows us to draw on ex
 isting knowledge from the field of numerical analysis of differenti
 al equations. In particular\, numerical stability is key to a well-
 performing optimization or sampling algorithm: the larger the time-
 step that can be used while preserving the limiting behaviour of th
 e underlying differential equation\, the more computationally effic
 ient the algorithm is. With this in mind\, we will explore the appl
 icability of explicit stabilised Runge-Kutta methods to optimizatio
 n and sampling problems. These methods are optimal in terms of thei
 r stability properties within the class of explicit integrators\, a
 nd we will show that\, when used as optimization methods\, they mat
 ch the optimal convergence rate of the conjugate gradient method fo
 r quadratic optimization problems. Numerical investigations indicat
 e that in the general case they are able to outperform state-of-the
 -art optimization methods such as Nesterov's accelerated method. In t
 he case of sampling\, we will investigate their applicability to Ba
 yesian inverse problems arising in computational imaging. An additi
 onal complexity arises there because many of these problems contain n
 on-differentiable terms\, which\, when regularised\, lead to extra s
 tiffness\, making explicit stabilised methods even more suitable\, a
 s illustrated by a range of numerical experiments showing that\, fo
 r the same computational cost as current state-of-the-art methods\, e
 xplicit stabilised methods deliver much better MCMC samples.\n\nThi
 s is joint work with Armin Eftekhari (EPFL)\, Bart Vandereycken (Ge
 neva)\, Gilles Vilmart (Geneva)\, Marcelo Pereyra (Heriot-Watt) an
 d Luis Vargas (Edinburgh)
LOCATION:MR12
CONTACT:Dr Sergio Bacallado
END:VEVENT
END:VCALENDAR