Adversarial Explanations - You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods

If you have a question about this talk, please contact Mateja Jamnik.

ONLINE link.

Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME (Ribeiro et al., 2016), even suggests that model explanations can answer the question "Why should I trust you?" Here we show a straightforward method for modifying a pre-trained model to manipulate the output of many popular feature importance explanation methods with little change in accuracy, thus demonstrating the danger of trusting such explanation methods. We show how this explanation attack can mask a model's discriminatory use of a sensitive feature, raising strong concerns about using such explanation methods to check model fairness.
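To illustrate the kind of attack the abstract describes, the following is a minimal, hypothetical sketch (not the speakers' actual method). It exploits a known weakness of perturbation-based explainers such as LIME: their probe points fall off the data manifold. A model that detects off-manifold inputs can behave innocuously on them while still using a sensitive feature on real data; all names and thresholds below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 is "sensitive", feature 1 is benign.
# On-manifold points satisfy x[1] ~= x[0] (strongly correlated).
X = rng.normal(size=(200, 2))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=200)

def unfair_model(x):
    # Decision driven entirely by the sensitive feature x[:, 0].
    return (x[:, 0] > 0).astype(float)

def masked_model(x):
    # Off-manifold detector: LIME-style independent perturbations
    # break the x[1] ~= x[0] correlation, so route those probes to
    # an innocuous model that only looks at the benign feature.
    off_manifold = np.abs(x[:, 0] - x[:, 1]) > 0.3
    return np.where(off_manifold,
                    (x[:, 1] > 0).astype(float),  # benign behaviour
                    unfair_model(x))              # real behaviour

def lime_like_weights(model, x_star, n=500, scale=1.0):
    # Minimal LIME-style explanation: fit a local linear surrogate
    # on independent Gaussian perturbations around x_star.
    Z = x_star + scale * rng.normal(size=(n, 2))
    y = model(Z)
    coef, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n)], y, rcond=None)
    return coef[:2]  # per-feature importance weights

x_star = X[0]
w = lime_like_weights(masked_model, x_star)
# On real data masked_model agrees with unfair_model, yet the
# explanation attributes the prediction mainly to feature 1.
```

The point of the sketch is the gap it creates: on every on-manifold input the masked model's predictions are identical to the unfair model's, so accuracy is unchanged, while the surrogate explanation assigns most importance to the benign feature.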

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.

© 2006-2020 Talks.cam, University of Cambridge.