University of Cambridge > Talks.cam > Data Intensive Science Seminar Series > The Transformer (OOD) House of Cards
The Transformer (OOD) House of Cards
If you have a question about this talk, please contact Sri Aitken.

The Transformer architecture has been the landmark deep learning model of recent years, enabling seamless integration of information across many different modalities, with surprisingly insightful behaviours emerging at scale. However, despite the very challenging problems now within reach of Transformers, they seem unable to perform robustly when faced with variations of comparatively much simpler problems. We attribute this to shaky foundations: certain kinds of computations will always be out of reach of Transformers, no matter how well we train them, and many such computations occur outside the distribution the model was trained on. In this talk, I will outline some of these cracks in the system we have discovered, as well as ideas for the way forward towards building the generally intelligent agents of the future.

This talk is part of the Data Intensive Science Seminar Series.