Provable representation learning in deep learning
If you have a question about this talk, please contact Dr Sergio Bacallado.

Deep representation learning seeks to learn a data representation that transfers to downstream tasks. In this talk, we study two forms of representation learning: supervised pre-training and self-supervised learning.

Supervised pre-training uses a large labeled source dataset to learn a representation, then trains a classifier on top of that representation. We prove that supervised pre-training can pool the data from all source tasks to learn a good representation that transfers to downstream tasks with few labeled examples.

Self-supervised learning creates auxiliary pretext tasks that do not require labeled data. These pretext tasks are built solely from the input features, such as predicting a missing image patch, recovering the colour channels of an image, or predicting missing words. Surprisingly, predicting this already-known information helps in learning a representation effective for downstream tasks. We show that, under a conditional independence assumption, self-supervised learning provably learns representations useful for downstream tasks.
This talk is part of the Statistics series.