BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:A Weighting Based Adversarial Approach to Fairness in Machine Lear
 ning - Mladen Nikolić\, University of Belgrade
DTSTART:20200623T121500Z
DTEND:20200623T131500Z
UID:TALK148777@talks.cam.ac.uk
CONTACT:Mateja Jamnik
DESCRIPTION:"ONLINE link.":https://teams.microsoft.com/l/meetup-join/19%3a
 meeting_MWJhYjYwY2ItNzkyMS00ZDRkLWI5ZTMtMzM3Y2E5NDg0MTA4%40thread.v2/0?con
 text=%7b%22Tid%22%3a%2249a50445-bdfa-4b79-ade3-547b4f3986e9%22%2c%22Oid%22
 %3a%22760af26a-1349-4870-a967-af40fbad85e9%22%7d\n\nThe success of machi
 ne learning in real-world applications raises various issues related to i
 ts societal impact. One of them is that algorithms tend to learn biases t
 owards certain groups of the population if such biases are present in th
 e data from which the model is learnt. There are different approaches to t
 ackling the unfairness that such models can produce. One notable appro
 ach is based on instance weighting\, so as to decrease the impact of mor
 e biased instances. However\, the weighting is performed in preprocessin
 g and is oblivious to the properties of the model and the loss used in tr
 aining. Another is based on adversarial training and learns a fair repres
 entation of the data on which predictive models base their decisions. We d
 escribe an adversarial approach based on a weighting function learnt at t
 raining time\, aiming to take the best from both worlds: our weights are i
 nterpretable indicators of the fairness of individual instances\, but ar
 e learnt in the context of a specific model and loss function. In order t
 o explore different probabilistic assumptions for the weights\, we rely o
 n variational inference and sampling. Experimental comparison with severa
 l existing methods on four real-world datasets shows state-of-the-art per
 formance of the proposed method.
LOCATION:Online on Teams
END:VEVENT
END:VCALENDAR
