
Title to be confirmed


If you have a question about this talk, please contact Suchir Salhan.

The goal of Mechanistic Interpretability research is to explain how neural networks compute outputs in terms of their internal components. But how much progress has been made towards this goal? While a large amount of Mechanistic Interpretability research has been produced in recent years by academia, by frontier AI companies such as Google DeepMind, and by independent researchers, there are still large open problems in the field. In this talk, I will begin by discussing some background hypotheses and techniques in Mechanistic Interpretability, such as the Linear Representation Hypothesis and common causal interventions. Then, I'll discuss how this connects to research we've done at Google DeepMind in the past year, such as open-sourcing Gemma Scope, the most comprehensive set of Sparse Autoencoders, which took over 20% of the compute used to train GPT-3. Finally, I'll reflect on current priorities and disagreements in Mechanistic Interpretability, several of which build on Gemma Scope. In short, through circuits research Mechanistic Interpretability can uncover factors influencing model behavior that cannot naively be inferred from prompts and outputs, but it has thus far underperformed when benchmarked on well-defined real-world tasks (such as probing for harmful intent in user prompts).
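For readers unfamiliar with Sparse Autoencoders, the sketch below is a minimal, hypothetical illustration of the idea behind tools like Gemma Scope: a model activation is encoded into a much wider, sparsely active feature vector and then reconstructed, so individual features can be inspected as candidate interpretable directions. The dimensions, variable names, and the ReLU-plus-L1 objective are illustrative assumptions, not the actual Gemma Scope architecture.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Minimal sketch: encode an activation into a wide, sparse feature
    # vector and reconstruct the activation from it. Dimensions are
    # illustrative placeholders, not Gemma Scope's configuration.
    def __init__(self, d_model: int = 2304, d_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activation: torch.Tensor):
        # Each feature is intended to fire sparsely across inputs.
        features = torch.relu(self.encoder(activation))
        # Reconstruct the original activation from the active features.
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(activation, features, reconstruction, l1_coeff=1e-3):
    # Reconstruction error plus an L1 sparsity penalty on the features.
    mse = (reconstruction - activation).pow(2).mean()
    sparsity = features.abs().sum(dim=-1).mean()
    return mse + l1_coeff * sparsity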

Arthur Conmy is a Senior Research Engineer at Google DeepMind who works on the Mechanistic Interpretability team.

This talk is part of the NLIP Seminar Series.

