
Improving Speech Translation with Linguistically-Informed Representations


If you have a question about this talk, please contact Guy Aglionby.

Join Zoom Meeting https://cl-cam-ac-uk.zoom.us/j/99290674837?pwd=cEYvd0pSSXgvN2VERUpmblZ3QzJiZz09

Meeting ID: 992 9067 4837 Passcode: 999939

End-to-end models for speech translation (ST) couple speech recognition (ASR) and machine translation (MT) more tightly than a traditional cascade of separate ASR and MT models, offering simpler architectures and the potential for reduced error propagation. However, several challenges remain before end-to-end models match cascaded models, particularly in low-resource scenarios. Further, in the move towards more task-agnostic neural architectures, inductive biases for each task have largely been removed. In this talk, I will discuss some important considerations for building speech translation models (and why we should still draw inspiration from cascades), as well as three methods to re-introduce model biases through phonologically-informed representations and the situations where they are most beneficial.

This talk is part of the NLIP Seminar Series.

