
Motivation for this group, Goodhart's Law


If you have a question about this talk, please contact Adrià Garriga Alonso.

How can we design AI systems that reliably act according to the true intent of their users, even as the capability of the systems increases?

Come to this reading group with free pizza! This week we will get started by motivating why we are doing this. In part, the motivation is Goodhart’s Law [1] and its implications for evaluating AI systems and designing their objectives.
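Goodhart's Law (roughly: "when a measure becomes a target, it ceases to be a good measure") can be illustrated with a toy sketch. The numbers and functions below are hypothetical, not from the session material: an agent that optimizes a proxy metric correlated with the true objective ends up far from the true optimum.

```python
# Toy illustration of Goodhart's Law (hypothetical example, not from the talk):
# an agent maximizes a proxy metric that only partially tracks the true objective.

def true_objective(effort: float) -> float:
    # True value delivered: rises with effort at first, then falls
    # as effort goes into gaming the metric rather than real work.
    return effort - 0.2 * effort ** 2

def proxy_metric(effort: float) -> float:
    # Proxy measure (e.g. "lines of code written"): keeps rising with effort.
    return effort

candidates = [i * 0.5 for i in range(21)]  # effort levels 0.0 .. 10.0

best_by_proxy = max(candidates, key=proxy_metric)   # 10.0
best_by_truth = max(candidates, key=true_objective) # 2.5

print(true_objective(best_by_proxy))  # -10.0: optimizing the proxy hurts the true goal
print(true_objective(best_by_truth))  # 1.25: the true optimum is much more modest
```

Optimizing the proxy hard enough drives the true objective below what even zero effort would achieve, which is the failure mode the readings below categorize.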

The session will go as follows. At 17:00, we will start reading the material (see bottom), mostly individually. At 17:30, the discussion leader will start going through the paper, making sure everyone understands, and encouraging discussion about its contents and implications.

A basic understanding of machine learning is helpful, but detailed knowledge of the latest techniques is not required. Each session will open with a brief recap of the immediately necessary background. The goal of this series is to help attendees learn about existing work in AI research and eventually contribute to the field.

Join the mailing list, the Facebook group, or the page. Announcements about the week’s topic and other events will be sent there. Consider also inviting your friends!


“Building safe artificial intelligence: specification, robustness, and assurance” (2018), by Pedro A. Ortega, Vishal Maini, and the DeepMind safety team

“On the folly of rewarding A, while hoping for B” (1975), by Steven Kerr

“Categorizing Variants of Goodhart’s Law” (2018), by David Manheim and Scott Garrabrant (arXiv)

If you have already read the material in your own time, feel free to come by at 17:30.


This talk is part of the Engineering Safe AI series.


