
Computational Neuroscience Journal Club


  • Kris Jensen and Wayne Soo
  • Tuesday 31 May 2022, 15:00-16:30
  • Online on Zoom

If you have a question about this talk, please contact Jake Stroud.

Please join us for our fortnightly journal club, held online via Zoom, where two presenters jointly present a topic. The next topic is ‘Transformers in computational neuroscience – advances and future directions’, presented by Kris Jensen and Wayne Soo.

Zoom information: https://us02web.zoom.us/j/84958321096?pwd=dFpsYnpJYWVNeHlJbEFKbW1OTzFiQT09
Meeting ID: 849 5832 1096
Passcode: 506576

Summary: Since first rising to prominence five years ago, models based on the ‘transformer’ architecture have taken the field of machine learning by storm, with impressive advances on tasks ranging from image recognition and language modelling to predicting protein folding and automatically generating code. Meanwhile, transformers have continued to play a fairly minor role in systems and computational neuroscience, although some recent work has demonstrated potential uses of transformers both for neural data analysis and as explicit models of neural circuits. In this tutorial/discussion, we will first provide an introduction to the transformer architecture and go through a notebook implementing a minimal ‘vision transformer’. We will then talk briefly about some of the many impressive transformer-based advances in machine learning. Finally, we will highlight recent work that uses transformers in neuroscience and discuss the possible roles of such attention-based architectures in the field moving forward.
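To give a flavour of the core operation underlying the transformer architecture discussed above, here is a minimal sketch of scaled dot-product self-attention in numpy. This is an illustrative toy, not the presenters' notebook: the function and variable names (`self_attention`, `Wq`, `Wk`, `Wv`) and the choice of dimensions are assumptions made for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single sequence.

    X: (T, d_model) token embeddings (e.g. image patches in a vision
       transformer); Wq, Wk, Wv: (d_model, d_k) projection matrices.
    Returns the attended output (T, d_k) and the attention weights (T, T).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) similarity logits
    A = softmax(scores, axis=-1)             # each row sums to 1
    return A @ V, A

# Toy example: 4 'patch' tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
T, d_model, d_k = 4, 8, 8
X = rng.normal(size=(T, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape, A.shape)  # (4, 8) (4, 4)
```

A full (vision) transformer stacks this operation, extended to multiple heads, with residual connections, layer normalisation, and per-token feedforward layers; for images, the input tokens are flattened patch embeddings plus a positional encoding, as in Dosovitskiy et al. (2020).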

Relevant literature (machine learning):

  • Vaswani et al. (2017): “Attention is all you need”
  • Dosovitskiy et al. (2020): “An image is worth 16×16 words: Transformers for image recognition at scale”

Relevant literature (neuroscience):

  • Ye & Pandarinath (2021): “Representation learning for neural population activity with Neural Data Transformers”
  • Whittington et al. (2021): “Relating transformers to models and neural representations of the hippocampal formation”

This talk is part of the Computational Neuroscience series.


© 2006-2022 Talks.cam, University of Cambridge.