
The Unlearning Problem(s)


If you have a question about this talk, please contact Hridoy Sankar Dutta.

The talk presents challenges facing the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would get without training on a subset of data, arises from privacy legislation and as a potential solution to data poisoning. The first part of the talk discusses approximate unlearning and the metrics one might want to study. We highlight methods for two desirable (though often disparate) notions of approximate unlearning. The second part departs from this line of work by asking if we can verify unlearning. Here we show how an entity can claim plausible deniability, and conclude that at the level of model weights, being unlearnt is not always a well-defined property.
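The definition above — obtaining the model one would get without ever training on the forget set — has a simple reference point: retraining from scratch on the remaining data. The following minimal sketch (not from the talk; the data, model, and variable names are illustrative assumptions) uses ordinary least squares, where the model is fully determined by the training data, to make that reference model concrete.

```python
# Illustrative sketch of the exact-unlearning baseline: the "unlearned"
# model is *defined* as the model retrained from scratch with the
# forget set removed. Approximate unlearning methods try to reach this
# reference (or its distribution) without the cost of full retraining.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.01, size=100)

def train(X, y):
    # Ordinary least squares: deterministic, fully data-determined.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

forget = np.arange(10)  # indices the data owner asked to remove
keep = np.setdiff1d(np.arange(len(y)), forget)

w_full = train(X, y)                   # model trained on all data
w_retrained = train(X[keep], y[keep])  # the exact-unlearning reference
```

For deep networks, training is stochastic and many weight vectors are consistent with "never saw the forget set" — which is one reason, as the abstract notes, that being unlearnt is hard to verify at the level of model weights.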

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

This talk is part of the Computer Laboratory Security Seminar series.



© 2006-2024 Talks.cam, University of Cambridge.