Variational Bayes In Private Settings
If you have a question about this talk, please contact Alessandro Davide Ialongo.

This talk has been canceled/deleted.

Bayesian methods are frequently used to analyse privacy-sensitive datasets, including medical records, emails, and educational data, and there is a growing need for practical Bayesian inference algorithms that protect the privacy of individuals' data. To this end, we provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the variational posterior distributions simply by perturbing the expected sufficient statistics of the complete-data likelihood.

For widely used non-CE models with binomial likelihoods (e.g., logistic regression), we exploit the Pólya-Gamma data augmentation scheme to bring such models into the CE family, so that inference in the modified model resembles the original variational Bayes algorithm as closely as possible. The iterative nature of variational Bayes presents a further challenge for privacy preservation, as each iteration increases the amount of noise needed. We overcome this challenge by combining (1) a relaxed notion of differential privacy, called concentrated differential privacy, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise, and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models, including latent Dirichlet allocation (LDA), Bayesian logistic regression, and sigmoid belief networks (SBNs), evaluated on real-world datasets.

Speaker Bio: Mijung Park completed her Ph.D. in the Department of Electrical and Computer Engineering at The University of Texas at Austin, under the supervision of Prof. Jonathan Pillow (now at Princeton University) and Prof. Alan Bovik. She worked as a postdoc with Prof. Maneesh Sahani at the Gatsby Computational Neuroscience Unit, University College London. Currently, she works as a postdoc with Prof. Max Welling in the Informatics Institute at the University of Amsterdam. Her research focuses on developing practical algorithms for privacy-preserving data analysis. Previously, she worked on a broad range of topics including approximate Bayesian computation (ABC), probabilistic manifold learning, active learning for drug combinations and neurophysiology experiments, and Bayesian structure learning for sparse and smooth high-dimensional parameters.

This talk is part of the Machine Learning @ CUED series.
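The core mechanism described in the abstract, perturbing the expected sufficient statistics with calibrated noise, can be sketched as follows. This is a minimal illustration, not the talk's actual implementation: the function name, the clipping step, and the parameter `rho` (a zero-concentrated differential privacy budget) are illustrative assumptions; in the CE family, the VB posterior update depends on the data only through these statistics, so noising them privatises the whole update.

```python
import numpy as np

def privatise_sufficient_stats(stats, clip_norm, rho, rng=None):
    """Gaussian-mechanism perturbation of expected sufficient statistics.

    Illustrative sketch (not the speaker's code): `stats` is the vector of
    expected sufficient statistics, `clip_norm` bounds its L2 norm (and hence
    the sensitivity to one individual's data), and `rho` is a zCDP budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    stats = np.asarray(stats, dtype=float)
    # Clip so that the L2 sensitivity of the statistics is at most clip_norm.
    norm = np.linalg.norm(stats)
    if norm > clip_norm:
        stats = stats * (clip_norm / norm)
    # Gaussian mechanism: noise with sigma^2 = sensitivity^2 / (2 * rho)
    # satisfies rho-zCDP.
    sigma = clip_norm / np.sqrt(2.0 * rho)
    return stats + rng.normal(scale=sigma, size=stats.shape)

def total_zcdp_cost(rho_per_iter, num_iters):
    # zCDP composes additively across iterations, which is what gives the
    # tight bound on the privacy cost of running many VB iterations.
    return rho_per_iter * num_iters
```

The additive composition of zCDP is the reason the relaxed privacy notion helps here: running `T` iterations at `rho` per iteration costs `T * rho` in total, rather than the looser bounds obtained by naively composing (epsilon, delta) guarantees.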