BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Artificial Intelligence Research Group Talks (Comp
 uter Laboratory)
SUMMARY:A Weighting Based Adversarial Approach to Fairness
  in Machine Learning - Mladen Nikolić\, University
  of Belgrade
DTSTART;TZID=Europe/London:20200623T131500
DTEND;TZID=Europe/London:20200623T141500
UID:TALK148777AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/148777
DESCRIPTION:"ONLINE link.":https://teams.microsoft.com/l/meetu
 p-join/19%3ameeting_MWJhYjYwY2ItNzkyMS00ZDRkLWI5ZT
 MtMzM3Y2E5NDg0MTA4%40thread.v2/0?context=%7b%22Tid
 %22%3a%2249a50445-bdfa-4b79-ade3-547b4f3986e9%22%2
 c%22Oid%22%3a%22760af26a-1349-4870-a967-af40fbad85
 e9%22%7d\n\nThe success of machine learning in real-wo
 rld applications raises various\nissues related to
  its societal impact. One of them is that algorith
 ms\ntend to learn biases towards certain groups of
  the population if such\nbiases are present in the data fr
 om which the model is learnt. There are\ndifferent
  approaches to tackling the unfairness that such model
 s can\nproduce. One notable approach is based on i
 nstance weighting so as to\ndecrease the impact of
  more biased instances. However\, the weighting\ni
 s performed in preprocessing and is oblivious to t
 he properties of the\nmodel and loss used in training.
  Another one is based on adversarial\ntraining and
  learns a fair representation of the data based on w
 hich\npredictive models make their decisions. We d
 escribe an adversarial\napproach based on a weight
 ing function learnt at training time\, aiming to\nt
 ake the best of both worlds: our weights are interp
 retable\nindicators of the fairness of individual inst
 ances\, but are learnt in the\ncontext of a specific mod
 el and loss function. In order to explore\ndiffere
 nt probabilistic assumptions for the weights\, we 
 rely on\nvariational inference and sampling. Exper
 imental comparison with\nseveral existing methods 
 on four real-world datasets shows\nstate-of-the-ar
 t performance of the proposed method.
LOCATION:Online on Teams
CONTACT:Mateja Jamnik
END:VEVENT
END:VCALENDAR
