Variational inference in graphical models: The view from the marginal polytope

If you have a question about this talk, please contact Shakir Mohamed.

In last week’s RCC, we saw that loopy belief propagation can be connected to a constrained variational free energy optimisation, where the constraints ensure that the beliefs normalise and are locally consistent. This week, we’ll describe an alternative view of this optimisation which treats the constraint set (a domain called the marginal polytope) and the free energy separately. The optimisation takes place over the lower-dimensional space of generalised exponential family mean parameters. This representation makes explicit that a variational inference algorithm has two distinct components: (a) an approximation to the entropy function; and (b) an approximation to the marginal polytope. The viewpoint clarifies the essential ingredients of known variational methods and also suggests novel relaxations. Taking the “zero-temperature limit” recovers a variational representation of MAP computation as a linear program (LP) over the marginal polytope.
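To make these objects concrete, here is a minimal Python sketch (not from the talk or the tutorial) for a hypothetical three-node binary chain MRF; the sufficient statistics phi, the weights theta, and the brute-force enumeration are illustrative assumptions, feasible only because the model is tiny. It verifies the variational identity log Z = max over the marginal polytope of ⟨θ, μ⟩ + H(μ) at the exact mean parameters, and that in the zero-temperature limit the MAP value is a linear objective attained at a vertex of the polytope.

```python
import itertools
import numpy as np

# Hypothetical toy model: a binary pairwise MRF on the chain 0 - 1 - 2.
# Sufficient statistics phi(x): node indicators 1[x_i = 1] and edge
# indicators 1[x_i = 1, x_j = 1] (a minimal, non-overcomplete choice).
n = 3
edges = [(0, 1), (1, 2)]

def phi(x):
    """Sufficient statistic vector for a configuration x in {0,1}^3."""
    node_feats = [x[i] for i in range(n)]
    edge_feats = [x[i] * x[j] for (i, j) in edges]
    return np.array(node_feats + edge_feats, dtype=float)

theta = np.array([0.5, -1.0, 0.8, 1.2, -0.4])  # arbitrary illustrative weights

configs = list(itertools.product([0, 1], repeat=n))
Phi = np.array([phi(x) for x in configs])  # rows = vertices of the marginal polytope

# Exact log-partition function by brute force.
scores = Phi @ theta                # <theta, phi(x)> for every configuration
logZ = np.log(np.exp(scores).sum())

# Exact mean parameters mu = E[phi(X)]: a point inside the marginal polytope.
p = np.exp(scores - logZ)
mu = p @ Phi

# Variational identity: logZ = max_{mu in M} <theta, mu> + H(mu),
# attained at the exact mean parameters, where H is the entropy.
H = -(p * np.log(p)).sum()
assert np.isclose(theta @ mu + H, logZ)

# Zero-temperature limit: the MAP value is the LP max_{mu in M} <theta, mu>,
# attained at a vertex phi(x*) of the marginal polytope.
map_value = scores.max()
assert np.isclose(map_value, max(theta @ v for v in Phi))
print("logZ =", logZ, " MAP score =", map_value)
```

Approximate methods differ precisely in how they relax the two ingredients enumerated here exactly: loopy BP replaces M with the locally consistent polytope and H with the Bethe entropy.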

The material we hope to cover (and probably some extra) appears on slides 1-13 and 23-39 of this tutorial.

If you feel inclined to delve into the theory a little more, refer to this paper.

This talk is part of the Machine Learning Reading Group @ CUED series.
