
Combinatorial Stochastic Processes in Bayesian Nonparametrics


If you have a question about this talk, please contact Rowan McAllister.


The tutorial will cover random partitions, random tree structures, and random fragmentation and coagulation processes. These models have found use in clustering and density modeling applications. I will show how each of these structures may be constructed from the previous one. Along the way, we will also cover how to study these objects from the perspective of random measures, and we'll see why that perspective is so helpful, not least because it helps us to better understand exchangeability.
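To make the first of these objects concrete, here is a minimal sampler for an exchangeable random partition via the Chinese restaurant process. This sketch is illustrative rather than taken from the talk; the function name `crp_partition` and its parameterization are my own.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of {0, ..., n-1} from the Chinese restaurant
    process: customer i sits at an occupied table with probability
    proportional to its occupancy, or at a new table with probability
    proportional to the concentration parameter alpha."""
    rng = random.Random(seed)
    tables = []  # each block of the partition is a list of customer indices
    for i in range(n):
        weights = [len(t) for t in tables] + [alpha]
        r = rng.uniform(0, sum(weights))
        k = 0
        while r > weights[k]:
            r -= weights[k]
            k += 1
        if k == len(tables):
            tables.append([i])  # open a new table (new block)
        else:
            tables[k].append(i)
    return tables

print(crp_partition(10, alpha=1.0))  # a random partition of 10 items into blocks
```

Because the seating probabilities depend only on block sizes, relabeling the customers leaves the distribution over partitions unchanged, which is exactly the exchangeability property mentioned above.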

The next topic is random feature allocations. We will develop this theory, which at first may seem merely to mimic the theory of random partitions. However, I will show how random feature allocations may in fact be constructed from random partitions. This opens the door to constructing random feature allocations from random tree structures, random fragmentations/coagulations, and so on. The potential applications of these models will be clear.
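The canonical example of a random feature allocation is the Indian buffet process, in which each object may possess several features rather than belonging to a single block. The sketch below is my own illustration, not material from the talk; `ibp_features` and its parameterization are assumed names.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's product-of-uniforms method for a Poisson(lam) draw."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ibp_features(n, alpha, seed=0):
    """Sample a feature allocation of n objects from the Indian buffet
    process with mass parameter alpha: object i takes each existing
    feature k with probability m_k / (i + 1), where m_k is the number
    of earlier objects with that feature, then adds
    Poisson(alpha / (i + 1)) brand-new features of its own."""
    rng = random.Random(seed)
    features = []  # features[k] = set of objects possessing feature k
    for i in range(n):
        for f in features:
            if rng.random() < len(f) / (i + 1):
                f.add(i)
        for _ in range(_poisson(rng, alpha / (i + 1))):
            features.append({i})
    return features

print(ibp_features(8, alpha=2.0))  # each set is one feature's owners
```

Note the structural similarity to the Chinese restaurant process: both grow by size-biased reuse of existing structure plus a rate-alpha injection of new structure, which is one way to see why the two theories run in parallel.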

The theme of the talk is using simple models to construct more elaborate ones. The final bits of the talk are based on current work with Zoubin Ghahramani and Daniel M. Roy.

For preliminary reading:
  1. Ch. 2 of The Nested Chinese Restaurant Process and Bayesian Nonparametric Inference of Topic Hierarchies by Blei, Griffiths, and Jordan reviews the Chinese restaurant process, so it will be very helpful if you have little background in these topics. Ch. 3 covers nested Chinese restaurant processes in more detail than we will, so it is not necessary, but may be good if you're keen.
  2. For background on the random measure perspective, I think Ch. 2 of Sharing Clusters Among Related Groups: Hierarchical Dirichlet Processes by Teh, Jordan, Beal, and Blei serves our purposes well.
  3. For feature allocations, the obvious reference is The Indian buffet process: An introduction and review by Griffiths and Ghahramani. I would say Chs. 2 and 3 are excellent motivation for appreciating the similar yet distinct approaches of partitions vs. feature allocations.
  4. For the random measure perspective in this case, read the first 3 or 4 chapters of Hierarchical beta processes and the Indian buffet process by Thibaux & Jordan.
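The random measure perspective referred to in items 2 and 4 can itself be previewed with a few lines of code. Below is a minimal sketch, assuming a Uniform(0, 1) base measure, of Sethuraman's stick-breaking construction of a Dirichlet process random measure; the truncation tolerance `tol` and the function name are my own choices.

```python
import random

def dp_stick_breaking(alpha, tol=1e-8, seed=0):
    """Truncated stick-breaking construction of a Dirichlet process:
    G = sum_k w_k * delta(theta_k), where the weights w_k come from
    breaking a unit-length stick with Beta(1, alpha) proportions and
    each atom theta_k is drawn from the base measure (Uniform(0, 1)
    here).  Stops once the unbroken remainder falls below tol."""
    rng = random.Random(seed)
    weights, atoms, remaining = [], [], 1.0
    while remaining > tol:
        b = rng.betavariate(1.0, alpha)  # proportion of the stick to break off
        weights.append(remaining * b)
        atoms.append(rng.random())       # atom drawn from the base measure
        remaining *= 1.0 - b
    return weights, atoms

w, theta = dp_stick_breaking(alpha=2.0)
print(len(w), round(sum(w), 6))  # many atoms; weights sum to ~1
```

The resulting measure is almost surely discrete, and grouping observations by the atom they land on is what links the random measure view back to the random partitions discussed above.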

This talk is part of the Machine Learning Reading Group @ CUED series.
