The Grammar Variational Autoencoder & Counterfactual Fairness
If you have a question about this talk, please contact zoubin-office. This talk is part of the Machine Learning @ CUED series.

In this talk I'll be covering two research directions I'm really excited about. The first is on improving deep generative models for discrete data using grammars, and the second is on using causality to ensure that machine learning predictions aren't discriminatory.

In the first half of the talk I will describe how generative modeling of discrete data such as arithmetic expressions and molecular structures still poses significant challenges. Crucially, state-of-the-art methods often produce outputs that are not valid. We make the key observation that discrete data can frequently be represented as a parse tree from a context-free grammar. We propose a variational autoencoder which directly encodes from and decodes to these parse trees, ensuring the generated outputs are always syntactically valid; toy sketches of the parsing and masked-decoding ideas follow this abstract. Surprisingly, we show that not only does our model generate valid outputs more often, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs. We demonstrate the effectiveness of our learned models by showing their improved performance in Bayesian optimization for symbolic regression and molecule generation.

In the second half of the talk, I will detail how machine learning is now being used in settings where previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this work, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group; the formal statement is reproduced below. We demonstrate our framework on a real-world problem of fair prediction of success in law school and on identifying discrimination in stop-and-frisk data.
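To make the grammar observation concrete, here is a minimal sketch, not taken from the talk or paper, of representing a discrete object as a sequence of production rules from a context-free grammar. The toy grammar and expression are invented for illustration, and NLTK is used purely for convenience:

```python
# A minimal sketch (assumed toy grammar, not the paper's): parse an
# arithmetic expression into a tree under a context-free grammar, then
# read off the sequence of production rules. That rule sequence is the
# discrete representation a grammar-based VAE can encode and decode.
import nltk

grammar = nltk.CFG.fromstring("""
    S -> S '+' T | T
    T -> '(' S ')' | 'x' | '1'
""")

parser = nltk.ChartParser(grammar)
tokens = ["x", "+", "(", "x", "+", "1", ")"]
tree = next(parser.parse(tokens))

# A depth-first list of productions uniquely determines the tree, and hence
# the expression, so decoding rule sequences always yields valid syntax.
for production in tree.productions():
    print(production)
```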
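On the decoding side, validity can be enforced by masking: the decoder keeps a stack of unexpanded non-terminals and, at each step, may only choose productions whose left-hand side matches the top of the stack. The following is a hedged sketch of that idea under the same toy grammar; the function names and the logits callback standing in for the decoder network are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Toy grammar as (lhs, rhs) productions; this mirrors the NLTK sketch above.
PRODUCTIONS = [
    ("S", ["S", "+", "T"]),
    ("S", ["T"]),
    ("T", ["(", "S", ")"]),
    ("T", ["x"]),
    ("T", ["1"]),
]
NONTERMINALS = {"S", "T"}

def sample_valid_expression(logits_fn, rng, max_steps=50):
    """Sample a syntactically valid string by masking illegal productions.

    `logits_fn` stands in for the decoder network: it returns one score per
    production. In a real model it would be conditioned on the latent code.
    """
    stack, output = ["S"], []
    for _ in range(max_steps):
        if not stack:
            break
        symbol = stack.pop()
        if symbol not in NONTERMINALS:
            output.append(symbol)  # terminal symbols are emitted directly
            continue
        # Only productions expanding `symbol` are legal at this step.
        legal = np.array([lhs == symbol for lhs, _ in PRODUCTIONS])
        logits = np.where(legal, logits_fn(), -np.inf)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        choice = rng.choice(len(PRODUCTIONS), p=probs)
        # Push the right-hand side reversed so it unwinds left to right.
        stack.extend(reversed(PRODUCTIONS[choice][1]))
    # If max_steps is exhausted, a real implementation would pad or retry.
    return " ".join(output)

rng = np.random.default_rng(0)
print(sample_valid_expression(lambda: rng.normal(size=len(PRODUCTIONS)), rng))
```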
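For the fairness half, the counterfactual criterion has a precise form. As formalized in the associated paper (Kusner et al., 2017), with $A$ the protected attribute, $X$ the remaining features, $U$ the background variables of the causal model, and $\hat{Y}$ the predictor, counterfactual fairness requires, for every context $(x, a)$ and every alternative value $a'$ attainable by $A$:

```latex
P\big(\hat{Y}_{A \leftarrow a}(U) = y \,\big|\, X = x, A = a\big)
  = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \,\big|\, X = x, A = a\big)
```

for all $y$: changing only the protected attribute in the counterfactual, while conditioning on what was actually observed, must leave the predictor's distribution unchanged.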