
Understanding LLMs via their Generative Successes and Shortcomings.


If you have a question about this talk, please contact Panagiotis Fytas.

The generative capabilities of large language models have grown beyond the wildest imagination of the broader AI research community, leading many to speculate whether these successes should be attributed to the training data or to other properties of the models themselves. I will present work from my group that has revealed distinctive successes and shortcomings in the generative capabilities of LLMs: on knowledge-oriented tasks, on tasks with human and social utility, and on tasks that probe more than surface-level understanding of language. I will also discuss some aspects of language generation itself, and why algorithms like truncation sampling have been so successful.
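The truncation sampling mentioned above refers to decoding methods that discard the low-probability tail of the model's next-token distribution before sampling; nucleus (top-p) sampling is a well-known instance. A minimal sketch follows — the function name, threshold, and example distribution are illustrative choices, not details from the talk:

```python
import numpy as np

def top_p_filter(probs, p=0.9):
    """Truncate a next-token distribution with nucleus (top-p) sampling:
    keep the smallest set of most-probable tokens whose cumulative mass
    reaches p, zero out the rest, and renormalise."""
    order = np.argsort(probs)[::-1]              # tokens, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # always keep at least one token
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# Example: a 5-token vocabulary with a low-probability tail
probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
filtered = top_p_filter(probs, p=0.9)
# The two tail tokens are removed; the top three are renormalised.
```

One would then sample the next token from `filtered` rather than `probs`, which prevents rare, often incoherent tokens from ever being drawn.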

This talk is part of the Language Technology Lab Seminars series.


© 2006-2024 Talks.cam, University of Cambridge.