
Adversarial Machine Learning


If you have a question about this talk, please contact Matthew Ireland.

An adversarial example is an instance of input data that has been modified in such a way that a human observer would not see the difference, but a machine learning model would be tricked into misclassifying it. In this talk, we will see how such examples can compromise the integrity of neural network models. For example, a person could ‘paint’ a STOP sign in such a way that a self-driving car would interpret it as something completely different. We will explore how easy it is to generate images that state-of-the-art architectures misclassify. Afterwards, we will look into currently available defences and how one can employ a transferability attack to bypass them. The talk will conclude by comparing testing with verification, and seeing how each is used to assess the security of a neural network architecture.
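To make the idea concrete, here is a minimal sketch of one well-known way to generate adversarial examples, the Fast Gradient Sign Method (FGSM): perturb the input by a small step in the sign of the loss gradient with respect to the input. The talk does not specify which attack it covers, so this is an illustrative assumption; a toy logistic-regression classifier is used so the input gradient has a closed form, and all names and parameters below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """One FGSM step against a logistic-regression classifier.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y_true) * w, where p is the
    predicted probability. We move eps in the sign of that gradient,
    which increases the loss while keeping the change per pixel
    (here, per feature) at most eps.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy setup: a clean input that the model confidently labels as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])          # w @ x + b = 1.5, so p ≈ 0.82 (class 1)

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)
# The perturbed input differs by at most 0.9 per feature,
# yet the model now assigns it probability below 0.5 (class 0).
```

The same principle scales to deep networks, where the input gradient is obtained by backpropagation rather than a closed-form expression; the perturbation stays imperceptibly small while the prediction flips.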

This talk is part of the Churchill CompSci Talks series.



