From Translation Divergences to Structure-aware Neural Machine Translation

If you have a question about this talk, please contact Marinela Parovic.

Languages present a wide range of structures for expressing similar meanings. This variation has historically motivated the integration of linguistic structure into machine translation (MT) models, so as to abstract away from differences in realization, but such integration has received less attention since the introduction of neural MT. In this talk I will discuss ongoing work in our lab on characterizing translation divergences and their impact on the performance of today's neural MT models, as well as two approaches for integrating syntactic structure into MT models to address this gap.

Joint work with many lab members and collaborators, notably Leshem Choshen, Dmitry Nikolaev, Asaf Yehudai and Lior Fox.

This talk is part of the Language Technology Lab Seminars series.
