A sanity check on emergent properties
If you have a question about this talk, please contact Panagiotis Fytas.
One of the frequent claims in the mainstream narrative about large language models is that they have “emergent properties” (sometimes even dangerous enough to be considered an existential risk to humankind). However, there is much disagreement about even the very definition of such properties. If they are understood as a kind of generalization beyond training data – as something that a model does without being explicitly trained for it – I argue that we have not in fact established the existence of any such properties, and at the moment we do not even have the methodology for doing so.
This talk is part of the Language Technology Lab Seminars series.