
Can Sparsity Lead to Efficient LLMs?


If you have a question about this talk, please contact Panagiotis Fytas.

The rapid advancement of Large Language Models (LLMs) has revolutionized a wide range of natural language processing tasks. However, the substantial size of LLMs presents significant challenges in training, fine-tuning, and deployment. In this talk, I will discuss how sparsity, a fundamental characteristic of neural networks, can be leveraged to enhance LLM efficiency. The presentation will cover recent advances in LLM pruning and parameter-efficient fine-tuning, centered on the principle: Not Every Layer in LLMs is Worth Equal Computing.
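As a minimal illustration of the kind of sparsity the talk discusses (not a method from the talk itself), the sketch below performs magnitude-based pruning: the fraction of weights with the smallest absolute values is zeroed out, leaving a sparse matrix. The function name and example weights are hypothetical.

```python
# Illustrative sketch only: magnitude-based weight pruning,
# one common way to introduce sparsity into a weight matrix.

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of entries with smallest |value|."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)                 # number of entries to zero
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

W = [[0.9, -0.1, 0.3],
     [0.05, -0.7, 0.2]]
pruned = magnitude_prune(W, 0.5)  # keep only the largest-magnitude half
```

In practice, LLM pruning methods refine this idea, e.g. by using activation statistics or by allocating different sparsity budgets to different layers, in line with the principle that not every layer deserves equal computation.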

This talk is part of the Language Technology Lab Seminars series.


