
Attention Forcing: Improving attention-based sequence-to-sequence models


If you have a question about this talk, please contact Dr Kate Knill.

Autoregressive sequence-to-sequence models with attention mechanisms have achieved state-of-the-art performance in various tasks, including Neural Machine Translation (NMT), Automatic Speech Recognition (ASR) and Text-To-Speech (TTS). This talk introduces attention forcing, a family of training approaches that address a mismatch between training and inference. For autoregressive models, the standard training approach, teacher forcing, guides the model with the reference output history; during inference, however, the generated output history must be used. To reduce this mismatch, attention forcing guides the model with the generated output history and the reference attention. Extensions of this general framework will be introduced for more challenging applications. For example, most approaches addressing the training-inference mismatch are incompatible with parallel training, which is essential for Transformer models. In contrast, the parallel version of attention forcing supports parallel training, and hence Transformer models. The effectiveness of attention forcing will be demonstrated by experiments in TTS and NMT.
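The contrast between the two training regimes can be sketched in a few lines of toy code. This is a minimal, hypothetical illustration (the `decode` loop and `step_fn` are made up for this sketch, not the authors' implementation): under teacher forcing the reference token is fed back at each step, while under attention forcing the model's own output is fed back and the reference attention is supplied instead.

```python
def decode(reference, step_fn, mode="teacher_forcing"):
    """Toy autoregressive decoding loop (illustrative only).

    reference: list of (token, attention) pairs from the training data
    step_fn:   toy model step, (prev_token, attention_in) -> (token, attention)
    """
    outputs = []
    prev_token = "<s>"
    for ref_token, ref_attention in reference:
        if mode == "teacher_forcing":
            # The model conditions on the *reference* output history
            # and computes its own attention internally.
            token, _ = step_fn(prev_token, None)
            prev_token = ref_token      # feed back the reference token
        else:  # attention forcing
            # The model conditions on its *generated* output history,
            # guided by the *reference* attention.
            token, _ = step_fn(prev_token, ref_attention)
            prev_token = token          # feed back the model's own output
        outputs.append(token)
    return outputs


# Toy model step: appends "*" to the previous token, echoes the attention.
def toy_step(prev_token, attention):
    return prev_token + "*", attention


reference = [("a", [1, 0]), ("b", [0, 1])]
print(decode(reference, toy_step, "teacher_forcing"))    # history from reference
print(decode(reference, toy_step, "attention_forcing"))  # history from model output
```

The point of the sketch is the feedback path: only the `prev_token` assignment differs between the two branches, which is exactly the training-inference mismatch the talk targets.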

This talk is part of the CUED Speech Group Seminars series.



