Measuring and avoiding side effects using relative reachability
If you have a question about this talk, please contact Adrià Garriga Alonso.

This week we will read “Measuring and avoiding side effects using relative reachability”, by Krakovna, Orseau, Martic & Legg from DeepMind. The authors devise a measure of how much “impact” an agent has on the world, so that unspecified parts of the objective cause as few unintended consequences as possible. For example, we might want a cleaning robot to avoid breaking a vase. Or, if a self-driving car has the goal of getting us from A to B while observing traffic regulations, we want it to avoid running over small animals that might be crossing the road, even though this is rarely the kind of “self-driving car failure” that comes to mind.

There will be free pizza. At 17:00 we will start reading the paper, mostly individually. At 17:30 the discussion leader will start going through the paper, making sure everyone understands it and encouraging discussion of its contents and implications. Even if you think you cannot contribute to the conversation, you should give it a try. Last year we had several people from non-computer-y backgrounds, and others who hadn’t thought about alignment before, who ended up being essential. If you have already read the paper in your own time, you can arrive at 17:30 for the discussion.

A basic understanding of machine learning is helpful, but detailed knowledge of the latest techniques is not required. Each session will begin with a brief recap of the necessary background. The goal of this series is to get people to know more about existing work in AI safety research, and eventually to contribute to the field.

Invite your friends to join the mailing list (https://lists.cam.ac.uk/mailman/listinfo/eng-safe-ai), the Facebook group (https://www.facebook.com/groups/1070763633063871) or the talks.cam page (https://talks.cam.ac.uk/show/index/80932). Details about the next meeting, the week’s topic and other events will be advertised in these places.
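If you would like a feel for the core idea before the session: the paper penalises an agent for making states unreachable relative to a baseline. The following is a minimal toy sketch of that intuition (not the paper's exact formulation, which uses discounted reachability in an MDP; the graph, state names and function names here are invented for illustration):

```python
from collections import deque

def reachable(adj, start):
    """Return the set of states reachable from `start` via BFS
    over a deterministic transition graph `adj`."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def relative_reachability_penalty(adj, baseline_state, current_state):
    """Fraction of states that were reachable from the baseline state
    but are no longer reachable from the current state (an undiscounted,
    0/1 simplification of the paper's relative reachability measure)."""
    states = list(adj)
    base = reachable(adj, baseline_state)
    curr = reachable(adj, current_state)
    lost = sum(1 for s in states if s in base and s not in curr)
    return lost / len(states)

# Toy environment mirroring the vase example: once the vase is broken,
# the "intact" state can never be reached again (an irreversible action).
adj = {
    "intact": ["intact", "broken"],
    "broken": ["broken"],
}
print(relative_reachability_penalty(adj, "intact", "broken"))  # 0.5
print(relative_reachability_penalty(adj, "intact", "intact"))  # 0.0
```

Breaking the vase incurs a penalty because it cuts off half of the state space, while staying in the intact state incurs none; this is the sense in which the measure distinguishes irreversible side effects from harmless ones.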
This talk is part of the Engineering Safe AI series.