Linear Attention for Efficient Transformers

If you have a question about this talk, please contact Xianda Sun.

Attention may be all you need, but that doesn't mean it comes cheap. The Achilles' heel of the wildly successful Transformer architecture is its time and space complexity, which scale quadratically with the length of the input token sequence. A diverse taxonomy of methods has been proposed to remedy this bottleneck and recover linear complexity, including making attention local, sparse, or low-rank. We will explore the respective strengths and weaknesses of these approaches, discuss theoretical guarantees (or the lack thereof), and consider possible directions for future work.
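
To make the contrast concrete, below is a minimal NumPy sketch (not material from the talk itself) comparing standard softmax attention, which materialises an N x N score matrix, with the kernelised linear attention of suggested reading 2, which exploits the associativity of matrix products to avoid it. The function names and the elu-based feature map are illustrative choices following that paper, not a reference implementation.

# Minimal sketch, assuming single-head, non-causal attention with NumPy.
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: materialises the N x N score matrix (quadratic in N)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                 # (N, N) -- the bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                      # (N, d_v)

def elu_feature_map(x):
    """phi(x) = elu(x) + 1, the positive feature map used by Katharopoulos et al."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelised attention: phi(Q) (phi(K)^T V), never forming an N x N matrix."""
    Qp, Kp = elu_feature_map(Q), elu_feature_map(K)
    KV = Kp.T @ V                                           # (d_k, d_v), independent of N
    Z = Qp @ Kp.sum(axis=0, keepdims=True).T                # (N, 1) normaliser
    return (Qp @ KV) / Z                                    # (N, d_v)

# Toy usage: N = 6 tokens, d = 4 dimensions; the outputs differ because the
# feature map only approximates, rather than reproduces, the softmax kernel.
rng = np.random.default_rng(0)
N, d = 6, 4
Q, K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d)), rng.normal(size=(N, d))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)

The key point is that KV and the normaliser depend only on the feature dimension, so the cost grows linearly with sequence length, at the price of an approximation to softmax attention.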

Suggested reading:
  1. Attention is all you need (https://arxiv.org/abs/1706.03762). Seminal Transformers paper.
  2. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (https://arxiv.org/abs/2006.16236). Among the first papers on low-rank attention.
  3. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (https://arxiv.org/abs/2103.14030). Popular example of local attention.
  4. Big Bird: Transformers for Longer Sequences (https://arxiv.org/abs/2007.14062). Example of the benefits of using a combination of techniques.

This talk is part of the Machine Learning Reading Group @ CUED series.
