Are we making progress in unlearning? 

If you have a question about this talk, please contact Xianda Sun.

Machine unlearning is the problem of removing the influence of a subset of training data from a trained machine learning model. The problem has attracted increasing attention recently, driven by the prospect of removing outdated, harmful, private, or no-longer-permissible data from trained models in order to improve their accuracy and safety, or to protect privacy. A straightforward solution is to remove the unwanted data from the training set and retrain a new model from scratch. However, that solution is inefficient and impractical, especially in the era of increasingly large models that are increasingly expensive to train. Can we instead cause models to “forget” a subset of their training data after the fact? While this problem has close ties to many research areas, including continual learning, transfer learning, and privacy, machine unlearning is still in its infancy, with many open questions remaining, both in how to evaluate success and in how to improve upon existing methods. In this talk, I will discuss recent progress and remaining challenges, highlighting open questions and important directions for the community.
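
As a rough illustration of the retrain-from-scratch baseline mentioned in the abstract, here is a minimal sketch in Python; the dataset, model choice, and forget set are illustrative assumptions, not details from the talk.

```python
# Exact-unlearning baseline: drop the "forget" examples from the training set
# and retrain from scratch. Gold standard for forgetting, but requires a full
# retraining run, which is what makes it impractical for large models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical training data and original model.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
original_model = LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical subset whose influence we want to remove.
forget_idx = np.arange(100)
retain_mask = np.ones(len(X), dtype=bool)
retain_mask[forget_idx] = False

# Retrain on the retained data only.
retrained_model = LogisticRegression(max_iter=1000).fit(X[retain_mask], y[retain_mask])
```

Approximate unlearning methods aim to reach (or come close to) the behaviour of retrained_model without paying the cost of full retraining; how to evaluate whether they succeed is one of the open questions the talk addresses.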

This talk is part of the Machine Learning Reading Group @ CUED series.
