
Improving Model Robustness for Natural Language Inference


  • Joe Stacey (Imperial College London)
  • Friday 28 April 2023, 12:00-13:00
  • Computer Lab, FW26

If you have a question about this talk, please contact Michael Schlichtkrull.


Natural Language Inference (NLI) models are known to learn from biases within their training data, which affects how well they generalise to unseen datasets. Most methods for improving model robustness focus on preventing models from learning these biases, which can result in restrictive models and lower performance. We explore a range of alternative techniques to improve model robustness, including training models with human explanations, introducing a new logical reasoning framework, and generating domain-targeted data using GPT-3. We measure robustness by training models on SNLI and testing performance on MNLI, a challenging robustness setting where most prior work shows limited improvements.
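To make the bias problem concrete, here is a toy illustration (not from the talk, and using invented example sentences) of the lexical-overlap shortcut that NLI models trained on SNLI are known to exploit: predicting entailment whenever the hypothesis words all appear in the premise, regardless of meaning.

```python
def overlap_heuristic(premise: str, hypothesis: str) -> str:
    """Predict 'entailment' iff every hypothesis word occurs in the premise.

    A caricature of the lexical-overlap bias picked up from NLI training
    data; it ignores word order and meaning entirely.
    """
    premise_words = set(premise.lower().split())
    hypothesis_words = set(hypothesis.lower().split())
    return "entailment" if hypothesis_words <= premise_words else "non-entailment"

# Bias-aligned example: high overlap, and genuinely entailed.
print(overlap_heuristic("a man is playing a guitar on stage",
                        "a man is playing a guitar"))   # entailment (correct)

# Counterexample: full word overlap, but the meaning is reversed,
# so the heuristic's 'entailment' prediction is wrong here.
print(overlap_heuristic("the dog chased the cat",
                        "the cat chased the dog"))      # entailment (incorrect)
```

A model that internalises this shortcut scores well in-domain but fails on out-of-distribution examples like the second one, which is why cross-dataset evaluation (train on SNLI, test on MNLI) is used as the robustness measure.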


Joe is a 3rd year PhD student at Imperial College London supervised by Marek Rei. His research focuses on creating more robust NLP models that generalise better to unseen, out-of-distribution datasets. Joe is a recipient of the 2023 Apple Scholars in AI/ML PhD fellowship.

This talk is part of the NLIP Seminar Series.


