
Self-Supervised Representation Learning


If you have a question about this talk, please contact jg801.

Self-Supervised Learning (SSL), combined with deep neural network models, has seen great success recently. Representations learnt with SSL have been used to obtain state-of-the-art results on ImageNet as well as on text and speech classification tasks. We begin by introducing SSL, focusing on two of the most successful approaches: contrastive predictive coding (CPC) and Deep InfoMax (DIM). We provide information-theoretic interpretations of these approaches and discuss the role of mutual information as a principal motivator for this framework. Next, we consider the role of identifiability in representation learning with generative models. We discuss recent results demonstrating that seemingly heuristic SSL approaches can be used to guarantee identifiability in nonlinear ICA. These results imply an interesting link between identifiable generative models and SSL, potentially providing a principled foundation for SSL approaches.
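To make the contrastive objective behind CPC concrete, the following NumPy sketch computes the InfoNCE loss: each context vector is scored against every candidate sample, and the matching pair is treated as the correct class in a softmax cross-entropy. The function name, shapes, and temperature value here are illustrative assumptions, not details from the talk; CPC's connection to mutual information comes from the bound I(c; z) >= log N - loss.

```python
import numpy as np

def infonce_loss(z, c, temperature=0.1):
    """Illustrative InfoNCE loss (the contrastive objective used in CPC).

    z: (N, d) array of encoded samples (e.g. "future" observations).
    c: (N, d) array of context vectors.
    Each (c[i], z[i]) is a positive pair; the remaining z[j], j != i,
    serve as negatives for context c[i].
    """
    # Cosine-similarity logits between every context and every sample.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    logits = (c @ z.T) / temperature  # shape (N, N)

    # Numerically stable softmax cross-entropy, with the matching
    # index i as the "label" for row i.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss encourages the context to be more predictive of its paired sample than of the negatives, which is why the loss acts as a (biased) lower-bound estimator of the mutual information discussed in the talk.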

This talk is part of the Machine Learning Reading Group @ CUED series.


© 2006-2020 Talks.cam, University of Cambridge.