
Scaling inverse reinforcement learning for human-compatible AI


If you have a question about this talk, please contact Adrià Garriga Alonso.

Inverse reinforcement learning (IRL) is a technique for inferring human preferences from demonstrations of a target behaviour. Classical approaches make strong assumptions about human rationality, are designed for a single agent only, and do not scale to high-dimensional environments. In this talk, Adam Gleave will discuss recent work by him and his collaborators on scaling inverse reinforcement learning to video games, to demonstrations from multiple users with differing preferences, and to the very hard problem of learning from users with cognitive biases. The talk will be based on Inverse reinforcement learning for video games, Multi-task Maximum Causal Entropy Inverse Reinforcement Learning, and Inferring Reward Functions from Demonstrators with Unknown Biases.
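To give a flavour of what "inferring preferences from demonstrations" means in practice, here is a minimal sketch of maximum-entropy IRL (the classical approach the talk builds on) on a toy chain MDP. The environment, features, learning rate, and iteration count are all illustrative assumptions, not taken from the papers above; real implementations handle stochastic dynamics, sampled trajectories, and richer feature maps.

```python
import numpy as np

# Toy setup (illustrative, not from the talk): a 5-state chain, 2 actions.
n_states, n_actions, horizon = 5, 2, 10

def step(s, a):
    # Deterministic transitions: action 0 moves left, action 1 moves right.
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

# One-hot state features; the (unknown-to-the-learner) reward favours state 4.
features = np.eye(n_states)
true_reward = np.array([0., 0., 0., 0., 1.])

def soft_value_iteration(reward):
    """Softmax-optimal (maximum-entropy) policy for a given reward vector."""
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states, n_actions))
    for t in reversed(range(horizon)):
        Q = np.array([[reward[s] + V[step(s, a)]
                       for a in range(n_actions)] for s in range(n_states)])
        m = Q.max(axis=1)                                # stabilised log-sum-exp
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
        policy[t] = np.exp(Q - V[:, None])               # softmax over actions
    return policy

def expected_features(policy):
    """Expected feature counts of trajectories from state 0 under `policy`."""
    d = np.zeros(n_states); d[0] = 1.0                   # state distribution
    counts = np.zeros(n_states)
    for t in range(horizon):
        counts += d
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[step(s, a)] += d[s] * policy[t, s, a]
        d = d_next
    return counts @ features

# "Demonstrations": feature counts produced by an expert acting on true_reward.
demo_counts = expected_features(soft_value_iteration(true_reward))

# MaxEnt IRL gradient ascent: the gradient of the demonstration log-likelihood
# is (demonstrated feature counts) - (expected feature counts under current w).
w = np.zeros(n_states)
for _ in range(300):
    grad = demo_counts - expected_features(soft_value_iteration(features @ w))
    w += 0.05 * grad

# The learned reward weights should rank the goal state highest.
print(int(np.argmax(w)))  # 4
```

The key idea is the last loop: the learner never sees the true reward, only the expert's state-visitation statistics, and adjusts its reward estimate until its own soft-optimal behaviour matches them. The talk's contribution is making this scale beyond toy settings like this one.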

Adam is a PhD student at UC Berkeley working with Stuart Russell in the Center for Human-Compatible AI. After his talk, we will have time to discuss with him how he started working in alignment, what the most promising approaches and ways of getting involved are, and more.

There will definitely be snacks and pizza.

This talk is part of the Engineering Safe AI series.



