Can Sparsity Lead to Efficient LLMs?
If you have a question about this talk, please contact Panagiotis Fytas.

The rapid advancements in Large Language Models (LLMs) have revolutionized various natural language processing tasks. However, the substantial size of LLMs presents significant challenges in training, fine-tuning, and deployment. In this talk, I will discuss how sparsity, a fundamental characteristic of neural networks, can be leveraged to enhance LLM efficiency. The presentation will cover recent advances in LLM pruning and parameter-efficient fine-tuning, centered on the principle: Not Every Layer in LLMs is Worth Equal Computing.

This talk is part of the Language Technology Lab Seminars series.
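To make the "not every layer is worth equal computing" principle concrete, the sketch below shows one common way sparsity can be applied non-uniformly across layers: unstructured magnitude pruning with a different sparsity ratio per layer. This is a minimal illustration only, not the speaker's method; it assumes PyTorch, and the toy model, layer names, and sparsity ratios are placeholders.

```python
# Hypothetical sketch: non-uniform layer-wise magnitude pruning.
# Later layers are pruned more aggressively than early ones, illustrating
# the idea that not every layer needs the same amount of computation.
import torch
import torch.nn as nn


def prune_layer_(linear: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of one linear layer in place."""
    weights = linear.weight.data
    k = int(weights.numel() * sparsity)          # number of weights to remove
    if k == 0:
        return
    threshold = weights.abs().flatten().kthvalue(k).values
    mask = weights.abs() > threshold             # keep only large-magnitude weights
    weights.mul_(mask)


def prune_model_nonuniform(model: nn.Module, sparsity_per_layer: dict) -> None:
    """Apply a different sparsity ratio to each named linear layer."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and name in sparsity_per_layer:
            prune_layer_(module, sparsity_per_layer[name])


if __name__ == "__main__":
    # Toy two-layer model with placeholder per-layer sparsity ratios.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
    prune_model_nonuniform(model, {"0": 0.3, "2": 0.7})
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            zeros = (module.weight == 0).float().mean().item()
            print(f"layer {name}: {zeros:.0%} weights pruned")
```

The per-layer ratios here are arbitrary; in practice they would be chosen by measuring how sensitive each layer is to pruning, which is the kind of question the talk's principle addresses.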