
LP relaxations for MAP inference


If you have a question about this talk, please contact Yingzhen Li.

For discrete graphical models, we consider the combinatorial optimization challenge of finding a mode configuration of the variables, that is, a joint setting of all variables with highest probability, also known as maximum a posteriori (MAP) inference. We shall provide a brief introduction to a popular method that frames the problem as an integer linear program and then relaxes it to a linear program (LP) over continuous variables. For computational efficiency, the feasible set of this LP is typically relaxed further to an outer bound called the local polytope, which enforces only pairwise consistency between the node and edge marginals. We shall also discuss tighter relaxations that have recently been explored with some success, and touch on message passing methods that may be used to try to solve the problem efficiently.
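To make the local polytope relaxation concrete, here is a minimal sketch (not part of the talk materials) for a two-variable binary pairwise model solved with `scipy.optimize.linprog`. The decision variables are node pseudomarginals mu_i(x_i) and edge pseudomarginals mu_01(x_0, x_1); the equality constraints are exactly the normalization and pairwise marginalization conditions that define the local polytope. The potentials theta are illustrative values chosen here; since this tiny graph is a tree, the relaxation is tight and the LP optimum is integral.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order:
#   mu0(0), mu0(1), mu1(0), mu1(1),
#   mu01(0,0), mu01(0,1), mu01(1,0), mu01(1,1)
# linprog minimizes, so negate the potentials to maximize sum(theta * mu).
c = -np.array([0.0, 1.0,                # theta_0(x0): illustrative values
               0.5, 0.0,                # theta_1(x1)
               2.0, 0.0, 0.0, 2.0])     # theta_01: bonus for agreement

# Local polytope constraints:
#   rows 1-2: each node distribution sums to 1
#   rows 3-4: sum_{x1} mu01(x0, x1) = mu0(x0)
#   rows 5-6: sum_{x0} mu01(x0, x1) = mu1(x1)
A_eq = np.array([
    [ 1,  1,  0,  0, 0, 0, 0, 0],
    [ 0,  0,  1,  1, 0, 0, 0, 0],
    [-1,  0,  0,  0, 1, 1, 0, 0],
    [ 0, -1,  0,  0, 0, 0, 1, 1],
    [ 0,  0, -1,  0, 1, 0, 1, 0],
    [ 0,  0,  0, -1, 0, 1, 0, 1],
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
lp_value = -res.fun                     # relaxed MAP value (tight on a tree)
x0 = int(np.argmax(res.x[0:2]))         # decode by per-node argmax
x1 = int(np.argmax(res.x[2:4]))
print(lp_value, (x0, x1))
```

Enumerating all four configurations by hand confirms the decoded assignment: (1, 1) scores 1.0 + 0.0 + 2.0 = 3.0, which matches the LP optimum here because the local polytope of a tree has only integral vertices. On loopy graphs the same LP can instead return fractional pseudomarginals, which is what motivates the tighter relaxations mentioned above.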

Readings:

Wainwright and Jordan (2008), Graphical Models, Exponential Families, and Variational Inference, Section 8

David Sontag's PhD thesis, Chapter 2

This talk is part of the Machine Learning Reading Group @ CUED series.

