# A Poisson Process Model for Monte Carlo

A Zoom link is available upon request; it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk. Sign up via lists.cam.ac.uk to receive reminders.

What is so fishy about rejection sampling, importance sampling, and the Gumbel-max trick? It turns out that in each case, Poisson processes are lurking beneath the surface!

Poisson processes are the most fundamental point processes (“distributions over randomly placed points in space”) in probability theory. As such, they are widely used for modeling discrete phenomena across mathematics (e.g., the occurrence of primes along the integers in number theory), the sciences (e.g., neural spike trains or the occurrence of earthquakes), and engineering (e.g., incoming calls at a call center). However, a lesser-known fact about them is that they also provide a unifying view of (non-Markov chain) Monte Carlo, allowing us to recast sampling as a search problem over Poisson processes.

My talk aims to provide a brief introduction to Poisson processes and showcase four operations that preserve them: restriction, superposition, thinning, and mapping. Pleasingly, all the above-mentioned sampling algorithms follow the same three-step recipe:

1. Take a base Poisson process.
2. Modify it using one of the four operations.
3. Search over the points of the modified process to find our sample.
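As a taster, the recipe above can be sketched for the Gumbel-max trick, which samples exactly from a categorical distribution. The code below is my own illustration, not the talk's material; the function name and structure are assumptions.

```python
import numpy as np

def gumbel_max_sample(log_weights, rng):
    """Exact categorical sample via the Gumbel-max trick."""
    # Step 1: a base process: one standard Gumbel point per category.
    gumbels = rng.gumbel(size=len(log_weights))
    # Step 2: modify it by shifting each point by its log-weight
    # (an instance of the "mapping" operation).
    shifted = log_weights + gumbels
    # Step 3: search over the points: the argmax is an exact sample.
    return int(np.argmax(shifted))

rng = np.random.default_rng(0)
log_w = np.log(np.array([0.1, 0.2, 0.7]))
counts = np.bincount(
    [gumbel_max_sample(log_w, rng) for _ in range(100_000)], minlength=3
)
print(counts / counts.sum())  # empirical frequencies near [0.1, 0.2, 0.7]
```

Working with `log_weights` rather than normalized probabilities is what makes the trick numerically attractive: the weights never need to be normalized or exponentiated.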

In my talk, I will explain the details of how this general recipe gives rise to rejection sampling, the Gumbel-max trick, importance sampling (and more generally, A* sampling), and greedy Poisson rejection sampling, with a focus on their computational aspects, such as numerical stability and runtime. Finally, I will provide insight into how we can use them for data compression via channel simulation/relative entropy coding.
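To make the rejection-sampling instance concrete: it can be read as "thinning" a base process, keeping each proposed point with probability p(x) / (M q(x)) and returning the first survivor. The sketch below is my own minimal illustration under that standard formulation, not code from the talk.

```python
import numpy as np

def rejection_sample(target_pdf, propose, proposal_pdf, M, rng):
    """Draw one sample from target_pdf by thinning proposals from propose."""
    while True:
        x = propose(rng)                 # a point of the base process
        u = rng.uniform()
        if u < target_pdf(x) / (M * proposal_pdf(x)):  # thinning test
            return x                     # first surviving point

# Example: sample the triangular density p(x) = 2x on [0, 1]
# using a uniform proposal q(x) = 1 and envelope constant M = 2.
rng = np.random.default_rng(0)
samples = [
    rejection_sample(lambda x: 2 * x, lambda r: r.uniform(),
                     lambda x: 1.0, 2.0, rng)
    for _ in range(50_000)
]
print(np.mean(samples))  # should be near E[X] = 2/3
```

The runtime angle mentioned in the abstract shows up here directly: the expected number of proposals per accepted sample is M, so a loose envelope constant translates immediately into wasted work.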

This talk is part of the Machine Learning Reading Group @ CUED series.