Toward a Theoretical Understanding of Self-Supervised Learning in the Foundation Model Era

If you have a question about this talk, please contact Pietro Lio.

Lecture Theatre 1

Self-supervised learning (SSL) has become the cornerstone of modern foundation models, enabling them to learn powerful representations from vast amounts of unlabeled data. By designing auxiliary tasks on raw inputs, SSL removes the reliance on human-provided labels and underpins the pretraining–finetuning paradigm that has reshaped machine learning beyond the traditional empirical risk minimization framework. Despite its remarkable empirical success, its theoretical foundations remain relatively underexplored. This gap raises fundamental questions about when and why SSL works, and what governs its generalization and robustness. In this talk, I will introduce representative SSL methodologies widely used in foundation models, and then present a series of our recent works on the theoretical understanding of SSL, with a particular focus on contrastive learning, masked autoencoders, and autoregressive learning.
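To make the "auxiliary task on raw inputs" idea concrete, the sketch below implements an InfoNCE-style contrastive objective of the kind the abstract refers to: two augmented views of the same example form a positive pair, and all other examples in the batch serve as negatives. This is a minimal NumPy illustration under common assumptions (cosine similarity, a fixed temperature); function and variable names are illustrative, not from the talk.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss.

    Row i of z1 and row i of z2 are embeddings of two views of the same
    input (a positive pair); every other row acts as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix

    # Softmax cross-entropy with the diagonal (matched pairs) as targets.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))

# Identical views: matched pairs dominate the softmax, so the loss is small.
low = info_nce_loss(z, z)
# Unrelated views: the loss sits near the chance level log(N).
high = info_nce_loss(z, rng.normal(size=(8, 16)))
print(low, high)
```

No labels appear anywhere: the pairing of rows — which view came from which input — is itself the supervisory signal, which is what distinguishes this setup from empirical risk minimization on labeled data.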

This talk is part of the Foundation AI series.


© 2006-2025 Talks.cam, University of Cambridge.