
GenBench -- State-of-the-art generalisation research in NLP


  • Speaker: Dieuwke Hupkes (Facebook AI Research, ELLIS)
  • Time: Friday 27 January 2023, 12:00-13:00
  • Venue: Virtual (Zoom).

If you have a question about this talk, please contact Michael Schlichtkrull.


Good generalisation is of utmost importance for any artificial intelligence model. Traditionally, the generalisation capabilities of machine learning models are evaluated using random train/test splits. However, numerous recent studies have exposed substantial generalisation failures in models that perform well on such random, within-distribution splits. So, if random splitting is not a good way to measure how robustly models generalise to different scenarios, how should we evaluate that? In this talk, I present a taxonomy for characterising and understanding generalisation in NLP, and use it to analyse over 400 papers from the ACL Anthology.


Dieuwke Hupkes is a research scientist at FAIR. Previously, she was a post-doctoral researcher at the University of Amsterdam, where she also did her PhD. In her research, she studies neural models of language processing, into which she tries to incorporate knowledge from linguistics and the philosophy of language. She is particularly excited about what neural models might teach us about language and human language processing.

Topic: NLIP Seminar Time: Jan 27, 2023 12:00 PM London

Join Zoom Meeting

Meeting ID: 943 3037 5053 Passcode: 768471

This talk is part of the NLIP Seminar Series.



© 2006-2024, University of Cambridge.