Vibe checks and red teaming: why ML researchers are increasingly reverting to manual evaluation

If you have a question about this talk, please contact Mateja Jamnik.

There is a curious trend in machine learning (ML): researchers developing the most capable large language models (LLMs) increasingly evaluate them using manual methods such as red teaming. In red teaming, researchers hire workers to interact with an LLM and manually try to break it in some way. Similarly, some users pick their preferred LLM assistant by manually trying out various models, checking each LLM’s “vibe”. Given that LLM researchers and users alike actively seek to automate all sorts of other tasks, red teaming and vibe checks are surprisingly manual evaluation processes. This trend towards manual evaluation hints at fundamental problems that prevent more automatic evaluation methods, such as benchmarks, from being used effectively for LLMs. In this talk, I aim to give an overview of the problems that keep LLM benchmarks from being a fully satisfactory alternative to more manual approaches.

You can also join us on Zoom.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
