Recurrent Continuous Translation Models


If you have a question about this talk, please contact Rogier van Dalen.

Deep learning methods are well-suited for constructing distributed, continuous representations for linguistic units ranging from characters to sentences. These learnt representations come with an inherent, task-dependent notion of similarity that allows the models to overcome sparsity issues and to generalise well beyond the training domain. In this talk we extend these methods to machine translation and introduce a class of probabilistic models, Recurrent Continuous Translation Models (RCTMs), that rely purely on continuous representations of the source and target sentences. We explore several model architectures and show that the models obtain translation perplexities significantly lower than those of state-of-the-art alignment-based translation models. We also investigate the models' ability to generate translations directly, and solely, from the underlying continuous space.
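
The talk does not include an implementation, but the core idea it describes (scoring a target sentence with a recurrent model conditioned on a continuous representation of the source sentence) can be sketched briefly. The sketch below is illustrative only: the mean-of-embeddings source encoder, the GRU decoder, the layer sizes and the name ContinuousTranslationModel are assumptions for the example, not the architectures evaluated in the talk.

    import torch
    import torch.nn as nn

    class ContinuousTranslationModel(nn.Module):
        """Illustrative sketch: encode the source sentence as a single
        continuous vector and condition a recurrent target-side language
        model on it. All architectural choices here are assumptions."""

        def __init__(self, src_vocab, tgt_vocab, dim=128):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, dim)
            self.tgt_embed = nn.Embedding(tgt_vocab, dim)
            # Source representation: mean of word embeddings, a deliberately
            # simple stand-in for a richer sentence model.
            self.decoder = nn.GRU(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, src_ids, tgt_ids):
            # src_ids: (batch, src_len), tgt_ids: (batch, tgt_len)
            src_vec = self.src_embed(src_ids).mean(dim=1)   # (batch, dim)
            h0 = src_vec.unsqueeze(0)                        # (1, batch, dim)
            dec_in = self.tgt_embed(tgt_ids)                 # (batch, tgt_len, dim)
            dec_out, _ = self.decoder(dec_in, h0)
            return self.out(dec_out)                         # logits over target vocab

    # Toy usage: per-word perplexity of a target sentence given a source sentence.
    model = ContinuousTranslationModel(src_vocab=1000, tgt_vocab=1000)
    src = torch.randint(0, 1000, (1, 7))
    tgt = torch.randint(0, 1000, (1, 9))
    logits = model(src, tgt[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tgt[:, 1:].reshape(-1))
    print("per-word perplexity:", loss.exp().item())

Translation perplexity, the quantity the abstract compares against alignment-based baselines, is simply the exponentiated per-word cross-entropy of the target sentence under such a model.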

Bio

Nal is a second-year PhD student in the Computational Linguistics and Quantum groups at Oxford. Before joining Oxford, he studied CS, maths and logic at the ILLC and at Stanford. He is a recipient of the Clarendon fellowship.

This talk is part of the CUED Speech Group Seminars series.
