Larger and more instructable language models become less reliable
If you have a question about this talk, please contact Alva Markelius.

At this year's first CHIA Early Career Community Seminar, Jose Hernandez Orallo will present his recent study published in Nature, which shows that as LLM families were scaled up and shaped up, their models did not become more reliable. The talk also introduces several methodological innovations from the perspective of AI evaluation.

This talk is part of the Centre for Human Inspired AI Early Career Committee series.