
Making Large Language Models Safe: A Case Study of Llama2


If you have a question about this talk, please contact Ben Karniely.

Large Language Models (LLMs) have attracted interest from all over the world, especially since ChatGPT became the fastest-growing consumer internet app in history. As we enter a new era of possibilities with AI, new challenges also present themselves. In July 2023, Meta open-sourced Llama2, one of the largest language models released to date and a significant moment in the development of AI. Llama2 was the first LLM of its size and capability to be open-sourced; both the base model and a version fine-tuned for chat were released publicly for researchers and industry practitioners to build on. In this talk, I will recap the journey of making the Llama2 models safe and robust against misuse such as hate speech and misinformation. The talk will cover the technical details of how we defined safety for an LLM, the strategies we used to train and fine-tune the models to be safe, and the evaluations we conducted to verify that the models reached the level of safety we desired. I will also discuss the challenges that remain and possible directions for addressing them.

Link to join virtually: https://cam-ac-uk.zoom.us/j/81322468305

This talk is not being recorded.

This talk is part of the Wednesday Seminars - Department of Computer Science and Technology series.


