Can Language Models Learn Truthfulness?

If you have a question about this talk, please contact Panagiotis Fytas.

Today’s large language models (LLMs) are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. Can language models discern truth from falsehood in such contradictory data? This talk introduces a hypothesis for how LLMs can model truthfulness. Inspired by the agent-model view of language models, we hypothesize that LLMs can cluster truthful text by modeling a truthful persona: a group of agents that are likely to produce truthful text and that share similar features. I will discuss results on real data as well as controlled experiments on synthetic data that support this hypothesis. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness.

This talk is part of the Language Technology Lab Seminars series.
