University of Cambridge > Churchill CompSci Talks > Attention in sequence-to-sequence models

Attention in sequence-to-sequence models


If you have a question about this talk, please contact Matthew Ireland.

Sequence-to-sequence models map sequential data from one domain to another, and are at the core of machine translation (MT) technologies such as Google Translate. In this talk, we trace the major milestones of neural machine translation. The talk will also introduce the attention mechanism, which underpins virtually all state-of-the-art machine translation techniques in use today.
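The attention mechanism mentioned above can be illustrated with a minimal sketch. This is not taken from the talk itself; it is an assumed implementation of standard scaled dot-product attention, where each decoder query computes similarity scores against all encoder keys and returns a weighted average of the values:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def attention(queries, keys, values):
    # Scaled dot-product attention: each query attends over all keys,
    # producing a weighted combination of the values.
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ values, weights

# Toy example (illustrative data): 2 decoder queries attending
# over 3 encoder hidden states of dimension 4.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
out, w = attention(q, k, v)
print(out.shape)                       # (2, 4)
print(np.allclose(w.sum(axis=1), 1))  # True
```

The attention weights form a probability distribution over the encoder states, so the output for each query is a convex combination of the value vectors.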

This talk is part of the Churchill CompSci Talks series.



