Efficient Pre-Training and Inference Methods for Language Models
If you have a question about this talk, please contact Suchir Salhan.

This seminar has been rescheduled to a later date in November.

Abstract: With the advancement of large language models (LLMs), their high training and inference costs have become a major bottleneck. This talk focuses on cutting-edge algorithms for improving LLM efficiency. For pre-training, we will discuss data optimization methods that accelerate training by enhancing data quality. For inference, we will explore model compression (knowledge distillation) and architecture optimization (efficient attention mechanisms) as pathways to next-generation efficient model design.

This talk is part of the NLIP Seminar Series.