
Expectations vs. Reality: Lessons learned from Working on Toxic Content Detection in NLP


If you have a question about this talk, please contact Michael Schlichtkrull.

Join Zoom Meeting https://cl-cam-ac-uk.zoom.us/j/99831805544?pwd=NUMrTGE4K2U3V2h0NlhtTHNsOG5rQT09

Meeting ID: 998 3180 5544 Passcode: 779252

To improve the online moderation process, there is a growing need for toxic language detection tools that do not merely flag bad words but filter out toxic content in a more nuanced fashion. Training such models requires high-quality data. However, in the absence of universal definitions of terms such as hate speech, and given that data collection is typically keyword-based, available corpora are usually sparse and imbalanced, which makes detection challenging for current machine learning techniques.
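Since the abstract turns on this data problem, a brief illustration may help. The following is a minimal sketch, not taken from the talk, of one standard mitigation for imbalanced toxicity corpora: class reweighting, so that errors on the rare toxic class cost more during training. The 95/5 label split and the use of scikit-learn are illustrative assumptions, not details of the speaker's work.

# Hedged sketch: class reweighting for an imbalanced toxicity corpus.
# The 95/5 label split below is illustrative, not from the talk.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy labels: keyword-based collection typically yields far fewer
# toxic (1) examples than non-toxic (0) ones.
y = np.array([0] * 950 + [1] * 50)

weights = compute_class_weight(
    class_weight="balanced", classes=np.array([0, 1]), y=y
)
print(dict(zip((0, 1), weights)))
# "balanced" weights = n_samples / (n_classes * class_count)
# -> {0: ~0.53, 1: 10.0}; the minority toxic class is upweighted ~19x.

Such weights plug directly into most classifiers (for example, LogisticRegression(class_weight="balanced") in scikit-learn), making minority-class errors proportionally more expensive.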

In this talk, I will present my findings from working on (1) the construction of multilingual resources for robust toxic language and hate speech detection, (2) the study of bias in toxic language detection, and (3) the assessment of toxicity and harmful biases within large pre-trained language models (PTLMs), which are at the core of major NLP systems.

This talk is part of the NLIP Seminar Series.

