Revisiting Cross-Lingual Transfer Learning

If you have a question about this talk, please contact Panagiotis Fytas.

Given downstream training data in one language (typically English), the goal of cross-lingual transfer learning is to perform the task in another language. Existing approaches are broadly classified into three categories: zero-shot (fine-tune a multilingual language model on English data and transfer directly to the target language), translate-train (translate the training data into the target language with MT and fine-tune a multilingual language model), and translate-test (translate the evaluation data into English with MT and use an English model). Prior work mostly finds that translate-train performs best, followed by zero-shot and then translate-test, and focuses on improving multilingual models. In this three-part talk, we will revisit some of the fundamentals of this problem, challenging the conventional wisdom in the area. First, we will see that a large part of the improvements from using parallel data can be attributed to explicitly modeling parallel interactions, and that similar improvements can be obtained with synthetic data. Second, we will revisit the integration of MT into the pipeline, showing that the potential of translate-test has been largely underestimated. Finally, we will see how creating multilingual benchmarks through translation, as is commonly done, can introduce evaluation artifacts, which calls for reconsidering some prior findings.
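For readers new to these strategies, the sketch below contrasts the three pipelines in Python. It is a minimal illustration, not code from the talk: the helpers finetune, translate, and predict are hypothetical stand-ins for a task-specific fine-tuning loop, an MT system, and model inference, which a real pipeline would implement with, e.g., a multilingual encoder such as XLM-R and an off-the-shelf translation model.

    # Minimal sketch of the three cross-lingual transfer strategies.
    # All three helpers below are hypothetical stubs, not code from the talk.

    def finetune(model, examples):
        """Fine-tune `model` on labelled (text, label) pairs; returns the tuned model."""
        return model  # stub: real code would run gradient updates here

    def translate(texts, src_lang, tgt_lang):
        """Machine-translate a list of texts from `src_lang` to `tgt_lang`."""
        return texts  # stub: real code would call an MT system

    def predict(model, texts):
        """Label each text with `model`."""
        return ["label"] * len(texts)  # stub: real code would run inference

    def zero_shot(multilingual_model, en_train, tgt_test):
        # Fine-tune a multilingual model on English data, then apply it
        # directly to target-language inputs.
        tuned = finetune(multilingual_model, en_train)
        return predict(tuned, tgt_test)

    def translate_train(multilingual_model, en_train, tgt_test, tgt_lang):
        # Translate the English training data into the target language
        # with MT, then fine-tune the multilingual model on it.
        texts, labels = zip(*en_train)
        tgt_train = list(zip(translate(list(texts), "en", tgt_lang), labels))
        tuned = finetune(multilingual_model, tgt_train)
        return predict(tuned, tgt_test)

    def translate_test(english_model, en_train, tgt_test, tgt_lang):
        # Fine-tune an English-only model, then translate the
        # target-language evaluation data into English with MT.
        tuned = finetune(english_model, en_train)
        return predict(tuned, translate(tgt_test, tgt_lang, "en"))

The structural difference the talk revisits is visible here: zero-shot and translate-train invest in the multilingual model, while translate-test moves the MT step to inference time and relies on an English-only model.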

This talk is part of the Language Technology Lab Seminars series.
