Unsupervised Representation Learning
If you have a question about this talk, please contact Konstantina Palla.
Being able to learn ‘good’ representations of data is, arguably, 90%
of the hard work for machine learning tasks. We currently have an
abundance of unlabelled data, with more being created every day.
It is therefore imperative that we can design and train representation
learning algorithms in an unsupervised setting.
In this tutorial-style talk, we explore probabilistic and non-probabilistic
approaches, including Principal Component Analysis,
Restricted Boltzmann Machines and Autoencoders. We will also
discuss the benefits of deep architectures and how to go about
training them.
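As a taste of the non-probabilistic side, Principal Component Analysis is perhaps the simplest unsupervised representation learner: it projects centred data onto the directions of greatest variance. A minimal sketch via the SVD (the data and the choice of k = 2 components are illustrative, not from the talk):

```python
import numpy as np

# Illustrative toy data: 100 samples with 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# PCA via SVD: centre the data, then project onto the top-k
# right singular vectors (the principal components).
k = 2
X_centred = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centred, full_matrices=False)

# Z is the learned k-dimensional representation of each sample.
Z = X_centred @ Vt[:k].T
print(Z.shape)  # (100, 2)
```

An autoencoder with a single linear hidden layer and squared-error loss learns essentially the same subspace, which is one reason PCA is a natural starting point before the deeper, nonlinear models covered later in the talk.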
This talk is part of the Machine Learning Reading Group @ CUED series.