Learning Rate Schedules, Scaling Laws, and Techniques for Pretraining LLMs
If you have a question about this talk, please contact Sally Matthews.

Large Language Model (LLM) pretraining relies on complex strategies for large-scale optimization, with the learning rate schedule being particularly important yet often following conventional rules. In this talk, I will discuss our recent NeurIPS Spotlight paper investigating a simple but effective strategy: a constant learning rate followed by a strategic cooldown. Our analysis demonstrates that this approach not only performs reliably but also offers practical advantages: it does not require a predetermined training length and easily allows continual training. Importantly, these findings enable more efficient scaling law experiments, since training runs can be reused, substantially reducing compute and GPU hours. In a follow-up work, we investigate theoretical explanations for the unique behavior of such learning rate schedules, leveraging last-iterate convergence bounds that closely match real experiments. I will conclude by introducing the Swiss AI initiative (https://www.swiss-ai.org/), which deploys the world's first national research infrastructure with 10,000 NVIDIA Grace Hopper GPUs. This initiative leverages research innovations such as the above to develop state-of-the-art open and multilingual LLMs, with the goal of advancing fully transparent scientific research on foundation models.

Bio: Alex Hägele is a PhD student at EPFL in the Machine Learning and Optimization (MLO) group, supervised by Martin Jaggi. He is currently part of the inaugural Anthropic Fellowship for AI Safety research, based in London. Previously, he completed his BSc and MSc in Computer Science at ETH Zürich and was a visiting Student Researcher at Apple MLR in Paris. His research explores the scaling behavior and training of language models, spanning optimization, data, and architectures.

This talk is part of the Cambridge ML Systems Seminar Series.
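For readers unfamiliar with the schedule the abstract describes, a minimal sketch of a constant learning rate with a final cooldown is given below. The function and parameter names are illustrative (not from the talk or paper), and the exact warmup and cooldown shapes used in the actual work may differ; the key property is that the constant phase means the total training length need not be fixed in advance.

```python
def constant_with_cooldown(step, max_steps, peak_lr=1e-3,
                           warmup_steps=100, cooldown_frac=0.2):
    """Constant learning rate with linear warmup and a final linear cooldown.

    Hypothetical parameter names for illustration. Because the middle phase
    is constant, training can continue indefinitely; only when a stopping
    point is chosen does the cooldown need to be scheduled.
    """
    cooldown_start = int(max_steps * (1 - cooldown_frac))
    if step < warmup_steps:
        # linear warmup from 0 to peak_lr
        return peak_lr * step / warmup_steps
    if step < cooldown_start:
        # constant phase: independent of the eventual training length
        return peak_lr
    # linear cooldown to zero over the final cooldown_frac of training
    return peak_lr * (max_steps - step) / (max_steps - cooldown_start)
```

This structure is what makes reusing runs for scaling-law experiments cheap: a single long constant-phase run can be branched into multiple cooldowns of different lengths, rather than restarting a full cosine schedule per target budget.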