Model Merging — A Tale of Two Settings
If you have a question about this talk, please contact Lucas Resck.

Abstract: In this talk, I will introduce the emerging field of model merging: the process of combining multiple neural networks into a single model without retraining. We'll begin with foundational concepts such as linear mode connectivity and task vectors, and explore two main settings: (1) merging models trained from scratch on the same task but with different initializations, and (2) merging models finetuned on different tasks from a shared pretrained base. I will then present a series of recent works that expand the model merging toolkit. These include the use of cycle consistency in permutation-based merging, insights into how task vectors relate to gradients, SVD-based approaches for low-rank model fusion, and the application of evolutionary algorithms to discover optimal merging coefficients. Throughout, we'll see how these techniques can be applied in real-world scenarios, from model compression in computer vision to the synthesis of state-of-the-art LLMs for low-resource languages.

Bio: Donato Crisostomi is an ELLIS PhD student at the Sapienza University of Rome and the University of Cambridge, currently interning at Cohere. His research focuses on model merging and representational alignment. He currently leads the "Model Reuse" work package for the 1.5M€ project "NEXUS". He previously held roles as a visiting researcher at the University of Cambridge, a Research Scientist at Amazon Alexa, and an Applied Scientist at Amazon Search. His research has been featured in top-tier AI conferences and journals, including CVPR, NeurIPS, ACM, ACL, and LoG. In addition to his scientific contributions, he has played an active role in the research community as an organizer of the UniReps workshop at NeurIPS, a mentor at LOGML, and a program committee member for leading conferences such as CVPR, NeurIPS, and ICLR.

This talk is part of the Language Technology Lab Seminars series.
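To make the task-vector setting from the abstract concrete, here is a minimal sketch of task arithmetic: each task vector is the elementwise difference between a finetuned model's weights and the shared pretrained base, and merging adds a scaled sum of task vectors back onto the base. This is an illustrative toy (plain NumPy arrays keyed by hypothetical parameter names), not the speaker's implementation.

```python
import numpy as np

def task_vector(finetuned, pretrained):
    """Task vector: finetuned weights minus the shared pretrained base."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def merge(pretrained, task_vectors, lam=0.5):
    """Add the scaled sum of task vectors back onto the pretrained base."""
    return {
        k: pretrained[k] + lam * sum(tv[k] for tv in task_vectors)
        for k in pretrained
    }

# Toy example: a one-parameter "model" finetuned on two different tasks.
base = {"w": np.zeros(3)}
model_a = {"w": np.array([1.0, 0.0, 0.0])}  # finetuned on task A
model_b = {"w": np.array([0.0, 2.0, 0.0])}  # finetuned on task B

tvs = [task_vector(m, base) for m in (model_a, model_b)]
merged = merge(base, tvs, lam=0.5)
# merged["w"] -> array([0.5, 1.0, 0.0])
```

The scaling coefficient `lam` is exactly the kind of merging coefficient that the evolutionary-search work mentioned in the abstract aims to discover automatically.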