
Easter Talklets: Agnieszka and Lorena


If you have a question about this talk, please contact Agnieszka Słowik.

Speaker 1: Agnieszka Słowik

Title: Learning from multiple distributions

Abstract: Machine learning has proven extremely useful in many applications in recent years. However, many of these success stories stem from evaluating the algorithms on data very similar to the data they were trained on. When applied to a new data distribution (for instance, if the demographic group of users changes), machine learning algorithms fail. In this talk, I focus on an approach to achieving generalisation based on learning from multiple data distributions. The presented research contribution is twofold: 1) I present a new dataset for evaluating out-of-distribution generalisation, and 2) I state a new theoretical result on the capabilities of Distributionally Robust Optimisation and show how this result leads to practical recommendations. The talk is based on my two recent papers: "Linear unit-tests for invariance discovery" and "Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation".
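For readers unfamiliar with Distributionally Robust Optimisation, a minimal toy sketch (not taken from either paper) may help: where standard empirical risk minimisation averages the loss over all training distributions, DRO minimises the worst-case loss over them. The two-group setup and all names below are illustrative assumptions.

```python
# Toy group-DRO sketch: fit a 1-D linear model y = w*x by repeatedly
# taking a gradient step on whichever data distribution ("group")
# currently has the highest loss. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Two data distributions with different input scales but the same true slope.
groups = [
    (rng.normal(0, 1, 100), 2.0),  # (inputs, true slope)
    (rng.normal(0, 3, 100), 2.0),
]
data = [(x, slope * x) for x, slope in groups]

def group_losses(w):
    """Mean squared error of the model y = w*x on each group."""
    return [np.mean((w * x - y) ** 2) for x, y in data]

# ERM would minimise the average loss; DRO minimises max_g L_g(w)
# via subgradient descent on the worst-off group.
w = 0.0
for _ in range(200):
    losses = group_losses(w)
    g = int(np.argmax(losses))            # currently worst-off group
    x, y = data[g]
    grad = np.mean(2 * (w * x - y) * x)   # gradient of that group's loss
    w -= 0.01 * grad

print(round(w, 2))  # → 2.0 (the shared true slope)
```

Because both groups share the same underlying slope, DRO recovers it; the interesting regime discussed in the talk is when distributions disagree and the worst-case criterion changes which solution is preferred.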

Speaker 2: Lorena Qendro

Title: A Probabilistic Approach Towards Training-Free Adversarial Defense in Quantized CNNs

Abstract: Quantized neural networks (NNs) are the common standard for efficiently deploying deep learning models on tiny hardware platforms. However, we observe that quantized NNs are as vulnerable to adversarial attacks as full-precision models. With the proliferation of neural networks on small devices that we carry or that surround us, there is a need for efficient models that do not sacrifice trust in their predictions in the presence of malign perturbations. Current mitigation approaches often require adversarial training or are bypassed when the strength of adversarial examples is increased. In this talk, I will present a probabilistic framework that helps overcome these limitations for quantized deep learning models. We will see that it is possible to jointly achieve efficiency and robustness by accurately enabling each module of the framework, without the burden of re-training or ad hoc fine-tuning.
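To illustrate the vulnerability claim (this is a toy sketch, not the speaker's framework or experiments), the example below shows an FGSM-style perturbation flipping the prediction of a linear classifier both at full precision and after 8-bit weight quantisation; all values and the `quantize` helper are invented for the example.

```python
# Toy illustration: an adversarial perturbation fools a quantized
# linear classifier sign(w @ x) just as it fools the full-precision one.
import numpy as np

w = np.array([0.9, -0.5, 0.3, -0.8])   # full-precision weights
x = np.array([1.0, -1.0, 1.0, -1.0])   # clean input, predicted +1

def quantize(v, bits=8):
    """Uniform symmetric quantisation to the given bit width."""
    scale = np.max(np.abs(v)) / (2 ** (bits - 1) - 1)
    return np.round(v / scale) * scale

wq = quantize(w)  # 8-bit weights, numerically close to w

# FGSM-style step: move x against the sign of the weights to push
# the score sign(w @ x) towards the opposite class.
eps = 1.5
x_adv = x - eps * np.sign(w)

print(int(np.sign(w @ x)), int(np.sign(wq @ x)))          # → 1 1
print(int(np.sign(w @ x_adv)), int(np.sign(wq @ x_adv)))  # → -1 -1
```

Both models agree on the clean input and both are flipped by the same perturbation, matching the observation in the abstract that quantisation alone does not confer robustness.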

This talk is part of the Women@CL Events series.



© 2006-2021 Talks.cam, University of Cambridge.