Large Language Models, Model Collapse, and the Conservation of Information
If you have a question about this talk, please contact Pietro Lio.
LT1
Do Large Language Models (LLMs) think and reason? Are they perpetual information machines, producing endless coherent and correct text from finite training data? We explore how LLMs work and whether they produce rational thought and endless information. We show how theoretical considerations and experimental results from philosophy, statistics, information theory, and machine learning argue against the thesis that LLMs are rational, information-generating entities.
This talk is part of the Foundation AI series.