
Prosocial Language Models


If you have a question about this talk, please contact Panagiotis Fytas.

Large language models such as GPT-4 mark a significant advance in natural language processing, achieving near-human performance across a variety of tasks with little or no additional training data. Their remarkable capabilities stem from substantial parameter counts, often reaching into the hundreds of millions or even billions, and from the extensive web-sourced datasets used for pre-training. Yet the very characteristics that empower these models also leave them prone to reproducing the biases and antisocial behaviors found on the web, which poses considerable challenges for deployment in real-world, socially sensitive applications. In response, our laboratory develops techniques for post hoc mitigation of these antisocial tendencies, enforcing prosocial behaviors at inference time without resource-intensive retraining. This presentation will cover our latest efforts to reduce bias and improve alignment with human ethical standards in language models through inference-time interventions.
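To make the idea of an inference-time intervention concrete, the sketch below illustrates one common form of the approach (not necessarily the speaker's own method): adding a fixed steering direction to a transformer layer's hidden states during generation via a PyTorch forward hook, leaving the model's weights untouched. The model choice ("gpt2"), the layer index, the strength alpha, and the random steering vector are all placeholder assumptions; in practice such directions are typically estimated from contrastive examples of desired and undesired behavior.

```python
# Illustrative sketch of an inference-time intervention: steer one
# transformer block's hidden states with a fixed direction, no retraining.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6  # which block to intervene on (assumption)
hidden_size = model.config.hidden_size

# A real "prosocial" direction would be estimated from data, e.g. the mean
# difference between activations on prosocial vs. antisocial text; random here.
steering_vector = torch.randn(hidden_size)
steering_vector = steering_vector / steering_vector.norm()
alpha = 4.0  # intervention strength (assumption)

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # shift them along the steering direction and pass the rest through.
    hidden = output[0] + alpha * steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

prompt = "People who disagree with me are"
ids = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unmodified model
```

Because the intervention lives entirely in a removable hook, the base model's weights are never changed; this is what makes this family of post hoc methods attractive compared with resource-intensive retraining.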

This talk is part of the Language Technology Lab Seminars series.
