
An Overview of Probabilistic Latent Variable Models


If you have a question about this talk, please contact km723.

This talk showcases some interesting probabilistic interpretations of dimensionality reduction and unsupervised representation learning algorithms and presents the common statistical modelling assumptions that underpin them. Specifically, we’ll look at methods such as PCA, GMM, ICA, FA, VAE and GPLVM and show how they share the same modelling framework. If there’s interest, I’ll also talk about a large class of other algorithms that fit into this framework (such as t-SNE, UMAP, Isomap and MDS). I’ll cover some newer methods like contrastive learning and show how these fit in with classical latent variable models.
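The shared framework the abstract alludes to is the generative latent variable model: draw a latent z from a prior p(z), then generate the observation x from a likelihood p(x | z). As a minimal sketch (not from the talk itself), probabilistic PCA instantiates this with a standard Gaussian prior and a linear-Gaussian likelihood; the dimensions, mapping W and noise variance below are illustrative choices, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared template: z ~ p(z), then x | z ~ p(x | z).
# Probabilistic PCA uses z ~ N(0, I) and x | z ~ N(W z + mu, sigma^2 I);
# FA, GMM, VAEs and GPLVMs swap in different priors or likelihoods.
n, d, q = 2000, 5, 2          # samples, observed dim, latent dim (illustrative)
W = rng.normal(size=(d, q))   # linear mapping from latent to observed space
mu = np.zeros(d)              # mean of the observations
sigma2 = 0.1                  # isotropic observation noise variance

z = rng.normal(size=(n, q))                                    # latent draws
x = z @ W.T + mu + rng.normal(scale=np.sqrt(sigma2), size=(n, d))  # observations

# Marginalising z gives x ~ N(mu, W W^T + sigma^2 I), so the empirical
# covariance of a large sample should approximate this matrix.
C = W @ W.T + sigma2 * np.eye(d)
print(np.abs(np.cov(x, rowvar=False) - C).max())  # small for large n
```

Under this view, classical PCA is recovered as the maximum-likelihood solution for W in the limit sigma^2 → 0, which is one way the talk's unifying framework can be made concrete.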

This talk is part of the Astro Data Science Discussion Group series.

