
A Weighting Based Adversarial Approach to Fairness in Machine Learning


  • Speaker: Mladen Nikolić, University of Belgrade
  • Time: Tuesday 23 June 2020, 13:15–14:15
  • Venue: Online on Teams

If you have a question about this talk, please contact Mateja Jamnik.


The success of machine learning in real-world applications raises various issues related to its societal impact. One of them is that algorithms tend to learn biases against certain population groups if such biases are present in the data from which the model is learnt. There are different approaches to tackling the unfairness that such models can produce. One notable approach is based on instance weighting, so as to decrease the impact of more biased instances. However, the weighting is performed in preprocessing and is oblivious to the properties of the model and the loss function used in training. Another is based on adversarial training and learns a fair representation of the data, on the basis of which predictive models make their decisions. We describe an adversarial approach based on a weighting function learnt at training time, aiming to take the best from both worlds: our weights are interpretable indicators of the fairness of individual instances, yet are learnt in the context of a specific model and loss function. In order to explore different probabilistic assumptions for the weights, we rely on variational inference and sampling. Experimental comparison with several existing methods on four real-world datasets shows state-of-the-art performance of the proposed method.
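To make the general idea concrete, the following is a minimal sketch, not the talk's actual method: it replaces the variational weighting function with a simple heuristic in which instance weights are proportional to an adversary's per-instance loss when trying to recover the protected attribute from the classifier's predictions, so instances whose predictions leak the protected attribute are down-weighted during training. All names, the synthetic data, and the two-parameter adversary are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic biased data: protected attribute `a` leaks into features and label.
n = 2000
a = rng.integers(0, 2, n)                        # protected attribute
x = rng.normal(size=(n, 2)) + a[:, None]          # features correlated with a
y = (x[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

X = np.c_[x, np.ones(n)]          # add bias column
theta = np.zeros(3)               # classifier parameters
alpha, beta = 0.0, 0.0            # adversary parameters (predicts a from p)
w = np.ones(n)                    # instance weights, updated each epoch

lr = 0.5
for epoch in range(200):
    p = sigmoid(X @ theta)        # classifier predictions

    # Adversary step: logistic regression a ~ p, one gradient-descent update.
    q = sigmoid(alpha * p + beta)
    g = q - a
    alpha -= lr * np.mean(g * p)
    beta -= lr * np.mean(g)

    # Weighting step (heuristic stand-in for the learnt weighting function):
    # instances on which the adversary recovers `a` easily (low adversary
    # loss) get low weight; normalizing keeps the mean weight at 1.
    adv_loss_i = -(a * np.log(q + 1e-9) + (1 - a) * np.log(1 - q + 1e-9))
    w = adv_loss_i / adv_loss_i.mean()

    # Classifier step: gradient of the weighted logistic-regression loss.
    theta -= lr * (X.T @ (w * (p - y))) / n

# Accuracy of the classifier, and of the adversary at recovering `a`.
p = sigmoid(X @ theta)
acc = np.mean((p > 0.5) == y)
adv_acc = np.mean((sigmoid(alpha * p + beta) > 0.5) == a)
print(round(acc, 2), round(adv_acc, 2))
```

Unlike preprocessing-based reweighting, the weights here are recomputed inside the training loop against the current model, which is the property the abstract emphasises; the actual method additionally places a probabilistic model on the weights and fits it with variational inference and sampling, which this sketch omits.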

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.



© 2006–2024, University of Cambridge.