Improving Model Robustness for Natural Language Inference
If you have a question about this talk, please contact Michael Schlichtkrull.

Abstract: Natural Language Inference (NLI) models are known to learn from biases within their training data, which affects how well they generalise to unseen datasets. Most methods for improving model robustness focus on preventing models from learning these biases, which can produce overly restrictive models and lower performance. We explore a range of alternative techniques for improving robustness, including training models with human explanations, introducing a new logical reasoning framework, and generating domain-targeted data with GPT-3. We measure robustness by training models on SNLI and testing performance on MNLI, a challenging robustness setting in which most prior work shows limited improvement.

Bio: Joe is a third-year PhD student at Imperial College London supervised by Marek Rei. His research focuses on creating more robust NLP models that generalise better to unseen, out-of-distribution datasets. Joe is a recipient of the 2023 Apple Scholars in AI/ML PhD fellowship.

This talk is part of the NLIP Seminar Series.
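To make the evaluation protocol mentioned in the abstract concrete, below is a minimal sketch of the SNLI-to-MNLI robustness setup: fine-tune an NLI classifier on SNLI, then evaluate it zero-shot on the MNLI matched validation split. The model choice (bert-base-uncased), hyperparameters, and metric code are illustrative assumptions, not details taken from the talk.

```python
# Sketch of the SNLI -> MNLI cross-dataset robustness evaluation.
# Assumptions: Hugging Face datasets/transformers; any encoder classifier works.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # assumed model; not specified in the talk
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

def encode(batch):
    # NLI inputs are (premise, hypothesis) pairs; both datasets share the
    # label scheme 0=entailment, 1=neutral, 2=contradiction.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

# SNLI contains examples with label -1 (no gold label); filter them out.
snli = load_dataset("snli").filter(lambda ex: ex["label"] != -1).map(encode, batched=True)
mnli = load_dataset("multi_nli").map(encode, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="snli-model", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=snli["train"],
    eval_dataset=snli["validation"],
    compute_metrics=accuracy,
)
trainer.train()

# The gap between in-distribution (SNLI) and out-of-distribution (MNLI)
# accuracy is the robustness gap the talk targets.
print("SNLI dev:", trainer.evaluate(snli["validation"]))
print("MNLI matched dev:", trainer.evaluate(mnli["validation_matched"]))
```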