
Large Language Models, Model Collapse, and the Conservation of Information


If you have a question about this talk, please contact Pietro Lio.

Venue: LT1

Do Large Language Models (LLMs) think and reason? Are they perpetual information machines, producing endless coherent and correct text from finite training data? This talk explores how LLMs work and whether they produce rational thought and endless information. We show how theoretical considerations and experimental results from philosophy, statistics, information theory, and machine learning argue against the thesis that LLMs are rational, information-generating entities.

This talk is part of the Foundation AI series.


© 2006-2026 Talks.cam, University of Cambridge.