Representation learning and its applications
If you have a question about this talk, please contact James Fergusson.
Since 2006, deep learning research has blossomed as new tasks, architectures and techniques have revolutionised our ability to solve previously intractable problems. From Computer Vision and Natural Language Processing to playing Go and Atari, these accomplishments rest in large part on the ability to encode data into informative, informationally dense representations. In this talk, we explore advances in supervised, unsupervised and semi-supervised representation learning that have led to recent technologies such as the BERT architecture and its use in Google Search.
This talk is part of the Data Intensive Science Seminar Series.