
Towards Meaningful Stochastic Defences in Machine Learning


If you have a question about this talk, please contact Kieron Ivy Turk.

Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can break ML systems at the training, inference, and deployment stages. In this talk, I will cover recent work on attacking and defending machine learning pipelines with stochastic defences. I will describe how seemingly powerful defences fail to provide any security and turn out to be vulnerable even to standard attackers. I will then demonstrate a number of randomness-based defences that can provide both theoretical and practical performance improvements.
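The abstract does not name a specific defence, but a well-known example of a randomness-based defence in this space is randomised smoothing, where a classifier's prediction is a majority vote over noise-perturbed copies of the input. The sketch below is purely illustrative and is not the speaker's method; the `base_classifier` callable, noise level `sigma`, and sample count are assumed for the example.

```python
import numpy as np

def smoothed_predict(base_classifier, x, num_samples=100, sigma=0.25, seed=0):
    """Predict by majority vote over Gaussian-perturbed copies of x.

    Illustrative sketch of a randomised-smoothing-style defence;
    base_classifier maps an input vector to an integer class label.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(num_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)  # random perturbation
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    # Return the most frequently predicted class across the noisy samples.
    return max(votes, key=votes.get)

# Toy classifier (assumed for illustration): sign of the coordinate sum.
toy = lambda v: int(np.sum(v) > 0)
print(smoothed_predict(toy, np.array([1.0, 2.0])))  # expected: 1
```

Averaging over random perturbations like this makes the decision depend on the classifier's behaviour in a neighbourhood of the input rather than at a single point, which is the intuition behind many stochastic defences.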

Bio: Ilia Shumailov holds a PhD in Computer Science from the University of Cambridge, specialising in machine learning and computer security. During his PhD, under the supervision of Prof Ross Anderson, Ilia worked on a number of projects spanning machine learning security, cybercrime analysis, and signal processing. Following his PhD, Ilia joined the Vector Institute in Canada as a Postdoctoral Fellow, where he worked under the supervision of Prof Nicolas Papernot and Prof Kassem Fawaz. Ilia is currently a Junior Research Fellow at Christ Church, University of Oxford.

This talk is part of the Computer Laboratory Security Seminar series.


