An Instability in Variational Methods for Learning Topic Models

If you have a question about this talk, please contact INI IT.

STSW01 - Theoretical and algorithmic underpinnings of Big Data

Topic models are extremely useful for extracting latent degrees of freedom from large unlabeled datasets. Variational Bayes algorithms are the approach most commonly used by practitioners to learn topic models. Their appeal lies in the promise of reducing the problem of Bayesian inference to an optimization problem. I will show that, even within an idealized Bayesian scenario, variational methods display an instability that can lead to misleading results. [Based on joint work with Behrooz Ghorbani and Hamid Javadi]
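
To make the setting concrete, here is a minimal sketch (not the speaker's code) of learning a topic model by variational Bayes, using scikit-learn's LatentDirichletAllocation on a hypothetical synthetic corpus. The variational objective being optimized is non-convex, so different initializations can converge to different local optima; this sensitivity is one face of the kind of instability the talk discusses.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical toy corpus: 200 bag-of-words documents over a
    # 50-word vocabulary, drawn from 3 ground-truth topics.
    rng = np.random.default_rng(0)
    n_docs, n_words, n_topics = 200, 50, 3
    topics = rng.dirichlet(np.ones(n_words) * 0.1, size=n_topics)  # topic-word distributions
    doc_topic = rng.dirichlet(np.ones(n_topics), size=n_docs)      # per-document topic weights
    counts = np.stack(
        [rng.multinomial(100, doc_topic[d] @ topics) for d in range(n_docs)]
    )

    # Fit LDA by (batch) variational Bayes, scikit-learn's default
    # inference method for this estimator.
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)

    # Estimated topic-word matrix; rerunning with a different random_state
    # may land in a different local optimum of the variational objective.
    print(lda.components_.shape)  # (n_topics, n_words)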

This talk is part of the Isaac Newton Institute Seminar Series.
