
Natural Experiments in NLP and Where to Find Them


If you have a question about this talk, please contact Martina Scauda.

Zoom Link available upon request

When training language models, choices such as the random seed for data ordering or the token vocabulary size significantly influence model behaviour. Answering counterfactual questions like “How would the model perform if this instance were excluded from training?” is computationally expensive, as it requires re-training the model. Once set, these training configurations are effectively fixed, creating a “natural experiment” in which modifying the experimental conditions incurs a high computational cost. Econometric techniques for estimating causal effects from observational data allow us to analyse the impact of these choices without full experimental control or repeated model training. In this talk, I will present our paper, Causal Estimation of Memorisation Profiles (Best Paper Award at ACL 2024), which introduces a novel method based on the difference-in-differences technique from econometrics to estimate memorisation without requiring model re-training. I will also cover the necessary econometric concepts and key literature on memorisation in language models.
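To illustrate the difference-in-differences idea the abstract refers to, here is a minimal sketch, not the paper's actual estimator: a "treated" group of sequences is seen by the model at some training step, a "control" group is not, and we compare how each group's mean log-likelihood changes across that step. The data and the function name `did_estimate` are hypothetical, purely for illustration.

```python
import numpy as np

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences estimate.

    Each argument is an array of per-sequence log-likelihoods, measured
    before or after the training step at which the treated group was seen.
    The control group's change approximates the counterfactual trend the
    treated group would have followed had it not been trained on; the
    treated change minus the control change is the estimated effect
    (here: memorisation) of inclusion in training.
    """
    treated_change = np.mean(treated_after) - np.mean(treated_before)
    control_change = np.mean(control_after) - np.mean(control_before)
    return treated_change - control_change

# Hypothetical log-likelihoods (higher = model assigns more probability).
treated_before = np.array([-3.0, -2.8])  # before the group is trained on
treated_after  = np.array([-1.0, -1.2])  # after the group is trained on
control_before = np.array([-3.1, -2.9])  # never trained on in this window
control_after  = np.array([-2.9, -2.7])

effect = did_estimate(treated_before, treated_after, control_before, control_after)
print(effect)  # treated improved by 1.8, control by 0.2 → estimate 1.6
```

The key assumption inherited from econometrics is parallel trends: absent training on the treated sequences, both groups' log-likelihoods would have evolved similarly, so the control group's change can be subtracted out.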

Suggested readings:

Counterfactual memorization in neural language models (https://proceedings.neurips.cc/paper_files/paper/2023/file/7bc4f74e35bcfe8cfe43b0a860786d6a-Paper-Conference.pdf)

Quantifying memorization across neural language models (https://arxiv.org/pdf/2202.07646)

This talk is part of the Causal Inference Reading Group series.
