Bias Mitigation in the Wild: Challenges and Opportunities
If you have a question about this talk, please contact Mateja Jamnik.

Deep neural networks trained via empirical risk minimisation often exhibit significant performance disparities across groups, particularly when group and task labels are spuriously correlated (e.g., "grassy background" and "cows"). In this talk, I will first argue that previously proposed bias mitigation methods aiming to address this issue often either rely on group labels for training or validation, or require an extensive hyperparameter search, even when these assumptions are not made explicit. Such data and computational requirements hinder the practical deployment of these methods, especially when datasets are too large to be group-annotated, computational resources are limited, and models are trained through already complex pipelines. With this in mind, I will outline some of the challenges that must be addressed to design practical bias mitigation methods. I will then describe Targeted Augmentations for Bias mitigation (TAB), a new approach built around these design principles. I will conclude by showing how TAB, a simple hyperparameter-free framework that leverages the entire training history of a helper model to identify spurious samples, improves worst-group performance without any group information or model selection.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
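The abstract describes TAB's mechanism only at a high level. The sketch below is a minimal toy illustration of the general idea of using a helper model's per-sample loss history to flag likely bias-conflicting samples; it is not the TAB implementation presented in the talk. The synthetic data, the mean-loss statistic, and the 10% flagging threshold are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): record a helper model's
# per-sample loss at every epoch, then flag samples the helper fits
# slowly as likely bias-conflicting candidates for upweighting.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a spurious feature agrees with the label 90% of the time
# and is much easier to learn than the weak "true" feature.
n, d = 2000, 5
y = rng.integers(0, 2, n)
spurious = np.where(rng.random(n) < 0.9, y, 1 - y)
X = rng.normal(size=(n, d))
X[:, 0] = 2.0 * (2 * spurious - 1) + rng.normal(scale=0.5, size=n)  # strong spurious cue
X[:, 1] = 0.5 * (2 * y - 1) + rng.normal(scale=1.0, size=n)         # weak true cue

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a helper logistic-regression model by gradient descent,
# recording every sample's loss at every epoch.
w, b = np.zeros(d), 0.0
epochs, lr = 50, 0.1
loss_history = np.zeros((epochs, n))
for t in range(epochs):
    p = sigmoid(X @ w + b)
    loss_history[t] = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = p - y
    w -= lr * (X.T @ grad) / n
    b -= lr * grad.mean()

# Aggregate over the *entire* training history: samples with persistently
# high loss are those the spurious shortcut fails on.
mean_loss = loss_history.mean(axis=0)
flagged = mean_loss > np.quantile(mean_loss, 0.9)  # flag top 10% (illustrative choice)

is_conflicting = spurious != y
print(f"flagged {flagged.sum()} samples; "
      f"{(flagged & is_conflicting).sum()} are truly bias-conflicting "
      f"(out of {is_conflicting.sum()} in total)")

# One simple mitigation: upweight flagged samples when training the main
# model, e.g. sample_weights = np.where(flagged, 1.0 / flagged.mean(), 1.0).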