University of Cambridge

Learning and inference in probabilistic submodular models



STSW01 - Theoretical and algorithmic underpinnings of Big Data

I will present our work on inference and learning in discrete probabilistic models defined through submodular functions. These models generalize pairwise graphical models and determinantal point processes, express natural notions such as attractiveness and repulsion, and can capture richly parameterized, long-range, high-order dependencies. The key idea is to use sub- and supergradients of submodular functions, exploiting their combinatorial structure to efficiently optimize variational upper and lower bounds on the partition function. This approach enables efficient approximate inference in any probabilistic model that factorizes into log-submodular and log-supermodular potentials of arbitrary order. Our approximation is exact at the mode for log-supermodular distributions, and we provide bounds on the approximation quality of the log-partition function in terms of the curvature of the function. I will also discuss how to learn log-supermodular distributions via bi-level optimisation. In particular, we show how to compute gradients of the variational posterior, which allows the models to be integrated into modern deep architectures. This talk is primarily based on joint work with Josip Djolonga.
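To make the variational idea concrete, here is a minimal sketch (not the speaker's implementation). For a log-supermodular distribution p(S) ∝ exp(−F(S)) with F submodular, a subgradient m produced by Edmonds' greedy algorithm along any ordering of the ground set satisfies m(S) ≤ F(S) for all S, so the fully factorized quantity Σ_i log(1 + exp(−m_i)) upper-bounds log Z in closed form. The concave-of-cardinality F, ground set size, and ordering below are illustrative assumptions.

```python
import itertools
import math

def F(S):
    # Submodular example (assumption for illustration): concave function of |S|
    return math.sqrt(len(S))

V = [0, 1, 2, 3]  # small ground set so exact brute force is feasible

def subgradient(order):
    # Edmonds' greedy: marginal gains along a chain give a modular function
    # m with m(S) <= F(S) for all S (a vertex of the base polytope)
    m, prefix = {}, set()
    for v in order:
        m[v] = F(prefix | {v}) - F(prefix)
        prefix.add(v)
    return m

# Exact log-partition function of p(S) ∝ exp(-F(S)) by enumerating all 2^n subsets
Z = sum(math.exp(-F(set(S)))
        for r in range(len(V) + 1)
        for S in itertools.combinations(V, r))
logZ = math.log(Z)

# Variational upper bound: since m(S) <= F(S), exp(-F(S)) <= exp(-m(S)),
# and the modular sum factorizes: log Z <= sum_i log(1 + exp(-m_i))
m = subgradient(V)
upper = sum(math.log1p(math.exp(-mi)) for mi in m.values())

print(f"exact log Z = {logZ:.4f}, variational upper bound = {upper:.4f}")
```

Minimizing this bound over orderings (equivalently, over subgradients) is a combinatorial optimization problem that the talk's approach exploits; here a single fixed ordering already yields a valid bound.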

This talk is part of the Isaac Newton Institute Seminar Series.


