BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Contributed Talk: A review of modern Bayesian and post-Bayesian sa
 mpling methods - Marina Riabiz (King's College London)
DTSTART:20260211T140000Z
DTEND:20260211T143000Z
UID:TALK243115@talks.cam.ac.uk
DESCRIPTION:Understanding and predicting complex real-world phenomena lies
  at the core of empirical sciences\, from physics and engineering to medic
 ine and Earth system science. A fundamental challenge arises from the need
  to choose the appropriate scale at which to observe and model emergent be
 havior. Different scales often give rise to distinct physical mechanisms\,
 each described by complex and computationally demanding models\, such as s
 ystems of ordinary or partial differential equations\, that must be repeat
 edly evaluated during calibration and inference.\nMoreover\, mo
 dels derived from empirical observations are inevitably misspecified. As a
  result\, calibration to specific instances of a problem may lead to biase
 d or unstable predictions\, a difficulty that is further exacerbated in la
 rge-data regimes and in the presence of outliers. These challenges motivat
 e the development of inference methodologies that are both computationally
  scalable and robust to model mismatch.\nIn this talk\, I will present an 
 overview of recent advances in computational statistics that leverage AI-b
 ased tools to address these issues\, with a focus on Bayesian and post-Bay
 esian inference. The latter provides a principled framework for reasoning 
 under model misspecification\, enabling more reliable uncertainty quantifi
 cation across scales. I will review both stochastic approaches\, such as L
 angevin and mean-field Langevin dynamics\, and deterministic met
 hods based on variational principles\, including Stein variational gradien
 t descent and neural-network-based score and diffusion models. These metho
 ds enable sampling from target distributions that may be available only up
  to a normalization constant or defined implicitly as optimizers of loss f
 unctions. Finally\, I will highlight recent work aimed at improving the qu
 ality and robustness of the derived samples and resulting inferences.
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
