Improving Speech Translation with Linguistically-Informed Representations
If you have a question about this talk, please contact Guy Aglionby.

Join Zoom Meeting: https://cl-cam-ac-uk.zoom.us/j/99290674837?pwd=cEYvd0pSSXgvN2VERUpmblZ3QzJiZz09
Meeting ID: 992 9067 4837
Passcode: 999939

End-to-end models for speech translation (ST) couple speech recognition (ASR) and machine translation (MT) more tightly than a traditional cascade of separate ASR and MT models, offering simpler architectures and the potential for reduced error propagation. However, several challenges remain before end-to-end models can match cascaded models, particularly in low-resource scenarios. Moreover, in the move towards more task-agnostic neural architectures, inductive biases for each task have largely been removed. In this talk, I will discuss some important considerations for building speech translation models (and why we should still draw inspiration from cascades), as well as three methods to re-introduce model biases through phonologically-informed representations, and the situations where they are most beneficial.

This talk is part of the NLIP Seminar Series.
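For readers unfamiliar with the cascade vs. end-to-end distinction referred to in the abstract, the following is a minimal sketch of the two pipeline shapes. It is not from the talk: the model objects and method names (asr_model.transcribe, mt_model.translate, st_model.translate_speech) are hypothetical placeholders used only to contrast where the intermediate transcript does or does not appear.

```python
# Hypothetical illustration of cascaded vs. end-to-end speech translation.
# None of these classes or methods come from the talk or a specific library.

def cascade_st(audio, asr_model, mt_model):
    """Cascaded ST: ASR produces a source-language transcript, which a
    separate MT model then translates. Transcription errors propagate
    into the translation step."""
    transcript = asr_model.transcribe(audio)       # speech -> source text
    translation = mt_model.translate(transcript)   # source text -> target text
    return translation

def end_to_end_st(audio, st_model):
    """End-to-end ST: a single model maps speech directly to
    target-language text, with no intermediate transcript."""
    return st_model.translate_speech(audio)        # speech -> target text
```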