
Observation and Intervention Incentives in Causal Influence Diagrams: Towards an Understanding of Powerful Machine Learning Systems


If you have a question about this talk, please contact Adrià Garriga Alonso.

As machine learning systems gain in capability and complexity, understanding their incentives will become increasingly important. In this paper, we model their objectives and environment interactions in graphical models called influence diagrams. This allows us to answer two fundamental questions about the incentives of a machine learning system directly from the graphical representation: (1) which nodes would the system like to observe in addition to its observations, and (2) which nodes would the system like to control in addition to its actions? The answers tell us which information and influence points need extra protection, and have applications to fairness and reward tampering. For example, we may want a classifier for job applications not to use the race of the candidate, and a reinforcement learning agent not to take direct control of its reward mechanism. Different algorithms and training paradigms can lead to different influence diagrams, so our results can help us design algorithms with less problematic observation and intervention incentives.
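To make the idea concrete, here is a minimal sketch of reading an incentive off a graph. It uses a toy influence diagram for the job-application example; all node names, the graph structure, and the reachability criterion are illustrative assumptions, not the paper's exact graphical criterion (which is stated in terms of the full influence-diagram semantics).

```python
from collections import deque

# Toy influence diagram for a job-application classifier
# (all node names and edges are illustrative, not from the talk).
# edges[x] lists the children of node x in the directed graph.
edges = {
    "Race":           ["Application"],
    "Qualifications": ["Application", "Utility"],
    "Application":    ["Decision"],   # information link: the decision observes this
    "Decision":       ["Utility"],
    "Utility":        [],
}

def reaches(graph, src, dst):
    """Breadth-first search: is there a directed path src -> dst?"""
    queue, seen = deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return False

decision, utility, observed = "Decision", "Utility", {"Application"}

# Simplified heuristic (an assumption, not the paper's criterion):
# flag unobserved nodes that causally influence utility, since the
# system could benefit from observing or controlling them.
candidates = set(edges) - {decision, utility} - observed
flagged = sorted(x for x in candidates if reaches(edges, x, utility))
print(flagged)  # -> ['Qualifications', 'Race']
```

Under this toy criterion both `Race` and `Qualifications` are flagged, which is exactly why one would want the trained classifier's actual diagram to cut the path from `Race` to the decision.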

This talk is part of the Machine Learning @ CUED series.



© 2006-2019 Talks.cam, University of Cambridge.