An Introduction to In-Context Learning

Teams link available upon request (it is sent out on our mailing list, eng-mlg-rcc [at] lists.cam.ac.uk). Sign up to our mailing list via lists.cam.ac.uk to receive reminders.

In-context learning (ICL) has emerged as a powerful capability of large language models (LLMs), allowing them to adapt to new tasks without explicit parameter updates. This talk begins with an introduction to meta-learning and neural processes, which lay the foundation for ICL. We then move on to transformer-based ICL, where the model is either trained from scratch or leverages pre-trained LLMs. In an attempt to understand why ICL works, we discuss its connections to Bayesian inference, kernel regression, and gradient descent. Finally, we examine potential safety concerns in ICL, highlighting risks and challenges for reliable AI deployment.
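
For readers unfamiliar with the idea, a minimal sketch of ICL as few-shot prompting follows; the toy task, example strings, and prompt format are illustrative assumptions, not material from the talk.

    # Minimal sketch of in-context learning as few-shot prompting (illustrative only).
    # The model adapts to the task purely from the examples placed in the prompt;
    # no parameters are updated. Any pre-trained LLM completion interface could
    # consume the resulting prompt; none is called here.

    def build_icl_prompt(examples, query):
        """Concatenate labelled examples and an unlabelled query into one prompt."""
        lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
        lines.append(f"Input: {query}\nOutput:")
        return "\n\n".join(lines)

    # A toy sentiment task specified only through in-context examples.
    examples = [
        ("The film was a delight from start to finish.", "positive"),
        ("I want those two hours of my life back.", "negative"),
    ]
    prompt = build_icl_prompt(examples, "An unexpectedly moving performance.")
    print(prompt)  # This prompt would be sent to a pre-trained LLM for completion.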

This talk is part of the Machine Learning Reading Group @ CUED series.
