
Hidden Biases. Ethical Issues in NLP, and What to Do about Them


If you have a question about this talk, please contact Qianchu Liu.

A joint Leverhulme Centre for the Future of Intelligence (CFI) and Language Technology Lab (LTL) seminar on Human-Centric AI Technologies

Texts reflect their authors' demographic properties and biases, which in turn are magnified by statistical NLP models. This has unintended consequences for our analyses: if we do not pay attention to the biases these texts contain, we can easily draw the wrong conclusions and create disadvantages for our users.

In this talk, I will discuss several types of biases that affect NLP models, their sources, and potential countermeasures:

- biases stemming from the data, i.e., selection bias (if our texts do not adequately reflect the population we want to study), label bias (if the labels we use are skewed), and semantic bias (the latent stereotypes encoded in embeddings);
- biases deriving from the models themselves, i.e., their tendency to amplify any imbalances present in the data;
- design bias, i.e., the biases arising from our (the researchers') decisions about which topics to analyze, which data sets to use, and what to do with them.
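To make the notion of semantic bias concrete, here is a minimal sketch of how stereotype associations in embeddings can be measured by comparing cosine similarities to gendered anchor words. The vectors below are hypothetical toy values (not from the talk); a real analysis would load pretrained embeddings such as word2vec or GloVe instead.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy vectors standing in for pretrained word embeddings.
emb = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.2, 0.9, 0.3]),
    "he":     np.array([1.0, 0.0, 0.2]),
    "she":    np.array([0.0, 1.0, 0.2]),
}

def association(word, male="he", female="she"):
    """Difference in similarity to the two anchor words:
    positive -> closer to 'he', negative -> closer to 'she'."""
    return cosine(emb[word], emb[male]) - cosine(emb[word], emb[female])

for w in ("doctor", "nurse"):
    print(w, round(association(w), 3))
```

With these toy vectors, "doctor" scores positive and "nurse" negative, mimicking the occupational stereotypes that tests such as WEAT probe in real embedding spaces.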

For each bias, I will provide examples, discuss the possible ramifications for a wide range of applications, and show various ways to address and counteract these biases, ranging from simple labeling considerations to new types of models.
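The amplification tendency mentioned above can be illustrated with a deliberately trivial toy model (my own example, not from the talk): a "most frequent label" classifier trained on a mildly imbalanced data set turns a 65/35 split into 100/0 at prediction time.

```python
from collections import Counter

# Toy illustration: mildly imbalanced training labels.
train_labels = ["A"] * 65 + ["B"] * 35

# A trivial classifier that always predicts the majority training label.
majority = Counter(train_labels).most_common(1)[0][0]
predictions = [majority for _ in range(100)]

print(Counter(train_labels))  # 65/35 imbalance in the data
print(Counter(predictions))   # 100/0 after "modeling"
```

Real models are rarely this extreme, but the same pressure toward the majority pattern is what makes data imbalances grow rather than shrink in model output.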

This talk is part of the Language Technology Lab Seminars series.



© 2006-2023, University of Cambridge.