
Cooperative Inverse Reinforcement Learning


If you have a question about this talk, please contact Adrian Weller.

In order to realize the benefits of artificial intelligence, we need to ensure that autonomous systems reliably pursue the objectives that their designers and users intend. This is the crux of the value alignment problem: jointly determining another agent’s preferences and selecting actions in accordance with those preferences. Good strategies for value alignment have implications for the utility and value of consumer robotics and AI in the short term and the potential existential risk from superhuman intelligence in the long run. In this talk, I will present Cooperative Inverse Reinforcement Learning (CIRL), a novel mathematical framework for value alignment. The core of the approach relies on framing the problem as a game between two players (a human and a robot) with a shared objective and asymmetric information about that objective. I will give an overview of the model and present recent work that leverages this framework to analyze the incentives an agent has to defer to a human decision, likely failure modes of misspecified preference models, and ways to avoid accidental incentives for undesirable side effects.
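To make the game-theoretic framing concrete, here is a minimal toy sketch (a hypothetical illustration, not the talk's formal model): the shared reward depends on a parameter theta known only to the human; the robot maintains a belief over theta, updates it after observing the human act, and then acts to maximize expected shared reward.

```python
THETAS = [0, 1]        # possible reward parameters; only the human knows the true one
ACTIONS = ["a", "b"]   # actions available to either player

def reward(theta, action):
    """Shared reward: theta determines which action is preferred."""
    return 1.0 if (theta == 0 and action == "a") or (theta == 1 and action == "b") else 0.0

def human_act(theta):
    """A (myopically optimal) human demonstrates the best action under theta."""
    return max(ACTIONS, key=lambda a: reward(theta, a))

def robot_update(belief, observed_action):
    """Bayesian update: keep only the thetas consistent with the human's choice."""
    posterior = {t: p for t, p in belief.items() if human_act(t) == observed_action}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

def robot_act(belief):
    """The robot maximizes expected shared reward under its current belief."""
    return max(ACTIONS, key=lambda a: sum(p * reward(t, a) for t, p in belief.items()))

true_theta = 1
belief = {t: 1.0 / len(THETAS) for t in THETAS}          # uniform prior over theta
belief = robot_update(belief, human_act(true_theta))     # observe one demonstration
print(robot_act(belief))                                 # prints "b", the human's preference
```

The asymmetry of information is what distinguishes this from ordinary reinforcement learning: the robot's incentive is to learn the human's preferences, not to optimize a fixed reward of its own.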

This talk is part of the Machine Learning @ CUED series.


