
Machine Learning in context of Computer Security


If you have a question about this talk, please contact Kieron Ivy Turk.

This talk has been cancelled or deleted.

Machine learning (ML) has proven to be more fragile than previously thought, especially in adversarial settings. A capable adversary can cause ML systems to break at the training, inference, and deployment stages. In this talk, I will cover my recent work on attacking and defending machine learning pipelines; I will describe how otherwise-correct ML components end up being vulnerable because an attacker can break their underlying assumptions. First, using attacks against text preprocessing as an example, I will discuss why a holistic view of ML deployment is a key requirement for ML security. Second, I will describe how an adversary can exploit the computer systems underlying the ML pipeline to mount availability attacks at both the training and inference stages. At the training stage, I will present data ordering attacks that break stochastic optimisation routines. At the inference stage, I will describe sponge examples, inputs that soak up a large amount of energy and take a long time to process. Finally, building on my experience attacking ML systems, I will discuss developing robust defences against ML attacks that consider an end-to-end view of the ML pipeline.
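The data ordering attacks mentioned above rest on a simple property of stochastic optimisation: the parameters SGD produces depend on the order in which training examples are visited, so an attacker who controls only the ordering (not the data itself) can steer training. The following toy sketch (not from the talk; the function name and 1-D loss are illustrative) shows the order-dependence:

```python
# Toy illustration of why data ordering matters to SGD: one epoch of
# per-example gradient descent on the loss (w - x)^2 leaves the parameter
# in a different place depending purely on the visiting order of examples.

def sgd_one_epoch(examples, w=0.0, lr=0.1):
    """Run one SGD epoch, one example at a time, and return the final w."""
    for x in examples:
        grad = 2.0 * (w - x)  # d/dw of (w - x)^2
        w -= lr * grad
    return w

# Identical data, two different orders, two different results after one epoch.
w_forward = sgd_one_epoch([1.0, 2.0, 3.0])  # ≈ 1.048
w_reverse = sgd_one_epoch([3.0, 2.0, 1.0])  # ≈ 0.904
```

With enough epochs both orderings converge to the same minimiser here; the attacks described in the talk exploit the fact that, in realistic finite-epoch training, the ordering meaningfully changes the trajectory and the resulting model.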

Zoom details:

Meeting ID: 883 3101 5387, Passcode: 399338

RECORDING: Please note, this event will be recorded and will be available after the event for an indeterminate period under a CC BY-NC-ND license. Audience members should bear this in mind before joining the webinar or asking questions.

This talk is part of the Computer Laboratory Security Seminar series.



© 2006-2024, University of Cambridge.