
A Model-Learner Pattern for Bayesian Reasoning


If you have a question about this talk, please contact Peter Sewell.

A Bayesian model consists of a pair of probability distributions, known as the prior and sampling distributions. A wide range of fundamental machine learning tasks, including regression, classification, clustering, and many others, can all be seen as Bayesian models. We propose a new probabilistic programming abstraction, a typed Bayesian model, which is a pair of probabilistic functions for the prior and sampling distributions. A sampler for a model is an algorithm to compute synthetic data from its sampling distribution, while a learner for a model is an algorithm for probabilistic inference on the model. Models, samplers, and learners form a generic programming pattern for model-based inference. They support the uniform expression of common tasks including model testing, and generic compositions such as mixture models, evidence-based model averaging, and mixtures of experts. A formal semantics supports reasoning about model equivalence and implementation correctness. By developing a series of examples and three learner implementations based on exact inference, belief-propagation, and Markov chain Monte Carlo, we demonstrate the broad applicability of this new programming pattern.
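The pattern described in the abstract can be illustrated with a minimal sketch. The class and function names below are hypothetical, chosen for this example rather than taken from the talk's actual API; it uses a conjugate Beta-Bernoulli coin-flip model so that the learner can perform exact inference:

```python
import random

# Hypothetical sketch of the model-learner pattern: a model pairs a
# prior with a sampling distribution; a sampler draws synthetic data;
# a learner computes a posterior from observed data.

class BetaBernoulliModel:
    """Coin-flip model: Beta(a, b) prior over the bias, Bernoulli sampling."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b

    def prior(self, rng):
        # Draw a coin bias from the Beta(a, b) prior distribution.
        return rng.betavariate(self.a, self.b)

    def sample(self, bias, rng, n):
        # Sampling distribution: n Bernoulli(bias) coin flips.
        return [1 if rng.random() < bias else 0 for _ in range(n)]

def sampler(model, rng, n):
    """Compute synthetic data from the model's sampling distribution."""
    bias = model.prior(rng)
    return bias, model.sample(bias, rng, n)

def exact_learner(model, data):
    """Exact inference: the Beta prior is conjugate to the Bernoulli
    likelihood, so the posterior is Beta(a + heads, b + tails)."""
    heads = sum(data)
    tails = len(data) - heads
    return BetaBernoulliModel(model.a + heads, model.b + tails)

rng = random.Random(42)
model = BetaBernoulliModel()
_, data = sampler(model, rng, n=100)
posterior = exact_learner(model, data)
print(posterior.a, posterior.b)
```

Because the learner returns a model of the same type, models, samplers, and learners compose: the posterior can itself serve as the prior for further data, which is what makes the pattern generic.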

The talk is based on joint work with Mihhail Aizatulin (Open University), Johannes Borgstroem (Uppsala University), Guillaume Claret (MSR), Thore Graepel (MSR), Aditya Nori (MSR), Sriram Rajamani (MSR), and Claudio Russo (MSR).


This talk is part of the Semantics Lunch (Computer Laboratory) series.


