Adapters in Transformers. A New Paradigm for Transfer Learning…?

If you have a question about this talk, please contact Marinela Parovic.

Adapters have recently been introduced as an alternative transfer learning strategy. Instead of fine-tuning all weights of a pre-trained transformer-based model, adapters introduce small neural network components at every layer. The pre-trained parameters are frozen and only the newly introduced adapter weights are fine-tuned, encapsulating the downstream task information in designated parts of the model. In this talk we will provide an introduction to adapter training in natural language processing. We will go into detail on how the encapsulated knowledge can be leveraged for compositional transfer learning as well as cross-lingual transfer. We will briefly touch on the efficiency of adapters in terms of trainable parameters as well as (wall-clock) training time. Finally, we will provide an outlook on recent alternative adapter approaches and training strategies.
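
As an illustration of the approach described above, the following is a minimal sketch of a bottleneck adapter in PyTorch: a down-projection, a nonlinearity, an up-projection, and a residual connection, with only the adapter's parameters trained while the surrounding transformer weights stay frozen. The class name, bottleneck size, and example dimensions are illustrative assumptions, not the speaker's implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small bottleneck module inserted into a transformer layer.

    Hidden states are projected down to a narrow bottleneck, passed
    through a nonlinearity, projected back up, and added to the input
    via a residual connection.
    """
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection lets the adapter learn a small
        # correction on top of the frozen representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Only the adapter's parameters would be passed to the optimizer; the
# pre-trained transformer (not shown) is frozen via requires_grad_(False).
adapter = BottleneckAdapter(hidden_size=768)
x = torch.randn(2, 16, 768)                           # (batch, sequence, hidden)
print(adapter(x).shape)                               # torch.Size([2, 16, 768])
print(sum(p.numel() for p in adapter.parameters()))   # roughly 100k trainable parameters per adapter
```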

This talk is part of the Language Technology Lab Seminars series.
