Cooperative Inverse RL

If you have a question about this talk, please contact Alessandro Davide Ialongo.

Abstract:

The value alignment problem is that of ensuring that the values of an AI system align with the values of its operator. One potential approach to this problem is formalised as the Inverse Reinforcement Learning (IRL) setting. In IRL, the goal is to infer the reward function of an agent (a human) purely from observing its behaviour in the environment. In Cooperative IRL, the agents are allowed to interact, and from this interaction teaching strategies emerge that are more effective than passive observation. We will discuss how this problem is formalised and present an algorithm for approximating good teaching strategies.
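
To make the distinction concrete, here is a minimal Python sketch (not taken from the talk or its reading list; all numbers and names are hypothetical illustrations) contrasting a passive "expert" demonstration with a "teaching" demonstration. It assumes rewards are linear in trajectory features, the learner holds a small discrete set of reward hypotheses, and the learner models the demonstrator as Boltzmann-rational:

import numpy as np

# Candidate trajectories, each summarised by a feature vector (hypothetical).
trajectories = np.array([
    [1.0, 1.0],   # optimal under the true reward, but ambiguous
    [1.0, 0.0],   # slightly suboptimal, yet identifies the true reward
    [0.0, 1.0],
])

# Small discrete hypothesis space over linear reward parameters.
thetas = np.array([
    [1.0, 0.2],   # index 0: the true reward
    [1.0, 1.0],
    [0.2, 1.0],
])
true_idx = 0
beta = 5.0        # Boltzmann rationality of the modelled demonstrator


def demo_likelihood(theta):
    """P(demonstration | theta), assuming a Boltzmann-rational demonstrator."""
    scores = beta * (trajectories @ theta)
    scores = scores - scores.max()          # for numerical stability
    p = np.exp(scores)
    return p / p.sum()


def posterior(demo_idx):
    """Learner's posterior over thetas after observing one demonstration."""
    prior = np.ones(len(thetas)) / len(thetas)
    lik = np.array([demo_likelihood(th)[demo_idx] for th in thetas])
    post = prior * lik
    return post / post.sum()


# (a) Passive IRL: the demonstrator simply acts optimally for the true reward.
expert_demo = int(np.argmax(trajectories @ thetas[true_idx]))

# (b) Teaching: the demonstrator picks the demonstration that maximises the
#     learner's posterior probability on the true reward.
teaching_demo = int(np.argmax(
    [posterior(i)[true_idx] for i in range(len(trajectories))]
))

print("expert demo  :", expert_demo,
      "-> posterior on true reward:", round(posterior(expert_demo)[true_idx], 3))
print("teaching demo:", teaching_demo,
      "-> posterior on true reward:", round(posterior(teaching_demo)[true_idx], 3))

In this toy instance the expert-optimal trajectory is consistent with several reward hypotheses, whereas the slightly suboptimal but disambiguating demonstration leaves the learner far more certain of the true reward; this is the intuition behind the more effective teaching strategies mentioned above.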

Recommended reading:

Main Paper:
Optional background reading on Inverse Reinforcement Learning:

This talk is part of the Machine Learning Reading Group @ CUED series.
