
Differential Privacy and Probabilistic Inference


If you have a question about this talk, please contact Emli-Mari Nel.

Differential privacy is a recent privacy definition that permits only indirect observation of the data held in a database, through noisy measurements. I will show that there is a strong connection between differential privacy and probabilistic inference. Previous research on learning from data protected by differential privacy has centered on inventing sophisticated learning algorithms that are applied directly to the data and output model parameters provably respecting the privacy of the data set. Proving these privacy properties requires an intricate analysis of each algorithm on a case-by-case basis. While this approach has produced many valuable algorithms and results, it does not scale, for two reasons: first, to solve a new learning problem, one must invent and analyze a new privacy-preserving algorithm; second, one must then convince the owner of the data to run this algorithm. Both steps are challenging. In contrast, I will consider the potential of applying probabilistic inference to the measurements and the measurement process, deriving posterior distributions over the data sets and their model parameters. For the models investigated, probabilistic inference can improve accuracy, integrate multiple observations, measure uncertainty, and even provide posterior distributions over quantities that were not directly measured.
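As a minimal sketch of the idea (not the speaker's actual models), the snippet below assumes a simple count query protected by the standard Laplace mechanism, then applies Bayesian inference to the noisy measurement alone: a flat prior over plausible counts is combined with the known Laplace measurement model to yield a posterior over the hidden true count, without any further access to the raw data. The dataset, prior range, and epsilon value are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical secret statistic: the true count held by the data owner.
true_count = 42

# Laplace mechanism: for a count query (sensitivity 1), adding
# Laplace(1/epsilon) noise yields epsilon-differential privacy.
epsilon = 0.5
noisy = true_count + rng.laplace(scale=1.0 / epsilon)

# Probabilistic inference over the hidden count: combine a prior over
# plausible counts with the known measurement model to get a posterior.
counts = np.arange(0, 101)                    # assumed prior support
prior = np.ones_like(counts, dtype=float)     # flat prior
# Laplace likelihood of the observed measurement, up to a constant.
likelihood = np.exp(-epsilon * np.abs(noisy - counts))
posterior = prior * likelihood
posterior /= posterior.sum()

# Posterior mean and a simple credible interval summarize our
# uncertainty about the true count, given only the noisy measurement.
post_mean = float((counts * posterior).sum())
```

Because the posterior is an explicit distribution rather than a point estimate, further noisy measurements can be folded in by multiplying in additional likelihood terms, which is one way the abstract's "integrate multiple observations" can be realized.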

This talk is part of the Inference Group series.




© 2006-2024, University of Cambridge.