
Unlikelihood-training and Back-training for robust natural language understanding


If you have a question about this talk, please contact Marinela Parovic.

Language models are known to be good at both generalization and memorization. These abilities mean that a language model can be used directly as a knowledge base: for example, it can fill the blank in the sentences “The capital of Canada is BLANK” and “BLANK is the capital of Canada” with Ottawa, even if these exact syntactic constructions were never seen during training — a task that requires both generalization and memorization. But we also observe that language models commonly ignore complex phenomena such as negation; e.g., the model still predicts Ottawa as the answer to “The capital of Canada is not BLANK”. I will introduce a new training procedure and objective called “unlikelihood training with reference” in order to build language models that understand negation.
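As a rough illustration of the idea, the standard likelihood objective pushes the probability of a correct token up, while an unlikelihood term pushes the probability of a token that should *not* appear (e.g., "Ottawa" after a negated prompt) down by maximizing log(1 − p). This is only a minimal numeric sketch of the generic unlikelihood loss; the function names are illustrative and the talk's "with reference" variant adds details not reproduced here.

```python
import math

def likelihood_loss(p_correct: float) -> float:
    """Standard MLE term: maximize log p(correct token)."""
    return -math.log(p_correct)

def unlikelihood_loss(p_negative: float) -> float:
    """Unlikelihood term: penalize probability mass on a token that
    should NOT be predicted, by maximizing log(1 - p(negative token))."""
    return -math.log(1.0 - p_negative)

# Toy numbers: if the model puts p = 0.9 on "Ottawa" after
# "The capital of Canada is not BLANK", the unlikelihood loss is large;
# once that probability drops to 0.1, the loss becomes small.
print(unlikelihood_loss(0.9))  # large penalty
print(unlikelihood_loss(0.1))  # small penalty
```

Minimizing both terms jointly trains the model to keep predicting correct completions while suppressing completions that contradict the (negated) context.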

In the second part of the talk, I will show that the pretrain-and-fine-tune paradigm breaks down in out-of-distribution settings. For example, question answering and question generation models trained on Natural Questions do not generalize to other domains such as education or biomedicine. I will introduce a new technique called back-training that exploits unsupervised data in the target domain much more efficiently than self-training.
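The contrast between the two adaptation strategies can be sketched as follows: self-training pairs real target-domain inputs with (possibly noisy) model predictions, whereas back-training pairs model-generated inputs with real target-domain outputs, so the supervision side of each pair is clean. The function and parameter names below are illustrative, not the talk's actual API.

```python
from typing import Callable, List, Tuple

def self_training_pairs(
    target_inputs: List[str],
    model_predict: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Self-training: real target-domain INPUTS, pseudo-labeled by the
    current model, so the output side of each pair may be noisy."""
    return [(x, model_predict(x)) for x in target_inputs]

def back_training_pairs(
    target_outputs: List[str],
    inverse_model: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Back-training: real target-domain OUTPUTS (e.g., naturally
    occurring answers or passages), with inputs synthesized by an
    inverse model, so the output side of each pair stays gold."""
    return [(inverse_model(y), y) for y in target_outputs]
```

In the question-generation setting, for instance, self-training would generate questions for unlabeled target passages and trust them as labels, while back-training would start from real target-domain questions (or answers) and synthesize the other side.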

This talk is part of the Language Technology Lab Seminars series.



© 2006-2021 Talks.cam, University of Cambridge.