Differentially Private Bayesian Learning

If you have a question about this talk, please contact Alexander Matthews.

Many applications of machine learning, for example in health care, would benefit from methods that can guarantee the privacy of data subjects. Differential privacy has recently emerged as a leading framework for private data analysis. It guarantees privacy by requiring that the result of an algorithm should not change much even if a single data point is changed, thus providing plausible deniability for the data subjects.
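
For reference, the standard formalisation of this guarantee is that a randomised algorithm M is ε-differentially private if Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D') ∈ S] for every set of outputs S and every pair of data sets D, D' differing in a single record; smaller ε means a stronger privacy guarantee.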

In this talk I will present methods for efficient differentially private Bayesian learning. In addition to asymptotic efficiency, we will focus on how to make the methods efficient for moderately sized data sets. The methods are based on perturbing the sufficient statistics of exponential family models and on perturbing the gradients used in variational inference. Unlike previous state-of-the-art methods, ours can predict the drug sensitivity of cancer cell lines using differentially private linear regression with better accuracy than using a very small non-private data set.
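
As a rough sketch of the sufficient statistic perturbation idea, and not the speaker's exact implementation, the snippet below releases Laplace-noised sufficient statistics for linear regression; the clipping bounds, sensitivity bound and function name are illustrative assumptions.

    import numpy as np

    def perturbed_sufficient_stats(X, y, epsilon, x_bound=1.0, y_bound=1.0):
        """Release Laplace-perturbed sufficient statistics (X^T X, X^T y)
        for linear regression; the bounds and sensitivity used here are
        illustrative assumptions, not the talk's exact recipe."""
        # Clip features and responses so one record's contribution is bounded.
        X = np.clip(X, -x_bound, x_bound)
        y = np.clip(y, -y_bound, y_bound)
        d = X.shape[1]
        xx = X.T @ X  # sufficient statistic for the design matrix
        xy = X.T @ y  # sufficient statistic for the response
        # Loose L1 sensitivity when one record is replaced: the change in
        # X^T X is at most 2*d^2*x_bound^2 and in X^T y at most
        # 2*d*x_bound*y_bound (entrywise bounds summed).
        sensitivity = 2 * d * d * x_bound**2 + 2 * d * x_bound * y_bound
        scale = sensitivity / epsilon
        noisy_xx = xx + np.random.laplace(scale=scale, size=xx.shape)
        noisy_xy = xy + np.random.laplace(scale=scale, size=xy.shape)
        return noisy_xx, noisy_xy

A conjugate Gaussian posterior over the regression weights can then be computed from the noisy statistics in the usual way; gradient perturbation for variational inference follows the same pattern of clipping per-example contributions and adding calibrated noise before each update.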

This talk is part of the Machine Learning @ CUED series.
