
Towards Out-of-distribution generalization in NLP


  • Prof. He He (New York University)
  • Friday 28 January 2022, 13:00–14:00
  • Virtual (Zoom)

If you have a question about this talk, please contact Georgi Karadzhov.

Real-world NLP models must work well when the test distribution differs from the training distribution. While we have made great progress in natural language understanding thanks to large-scale pre-training, current models still take shortcuts and rely on spurious correlations in specific datasets. In this talk, I will discuss the role of pre-training and data in model robustness to distribution shifts. In particular, I will describe how pre-trained models avoid learning spurious correlations, when data augmentation helps and hurts, and how large language models can be leveraged to improve few-shot learning.

Please note unusual time! Georgi Karadzhov is inviting you to a scheduled Zoom meeting.

Topic: NLIP Seminar 28.01.2022 Time: Jan 28, 2022 01:00 PM London

Join Zoom Meeting

Meeting ID: 956 3929 4602
Passcode: 662202

One tap mobile:
+13126266799,,95639294602# US (Chicago)
+13462487799,,95639294602# US (Houston)

Dial by your location:
+1 312 626 6799 US (Chicago)
+1 346 248 7799 US (Houston)
+1 669 900 6833 US (San Jose)
+1 929 205 6099 US (New York)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington DC)
Meeting ID: 956 3929 4602
Find your local number:

This talk is part of the NLIP Seminar Series.




© 2006-2024, University of Cambridge.